Test Report: KVM_Linux 17764

                    
47aff3550d8f737faf92680522e584556adb8789:2023-12-12:32246

Failed tests (2/323)

Order  Failed test                              Duration (s)
230    TestMultiNode/serial/RestartMultiNode    87.1
315    TestNetworkPlugins/group/bridge/Start    60.11
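To reproduce the first failure outside the CI harness, a minimal sketch: the start invocation is copied verbatim from the failure log below, while the make target and the log-capture step are assumptions about a local minikube checkout.

# Build the minikube binary under test (target name is an assumption about the local checkout).
make out/minikube-linux-amd64

# Re-run the exact start command from the failing test, as recorded in the log below.
out/minikube-linux-amd64 start -p multinode-859606 --wait=true -v=8 --alsologtostderr --driver=kvm2

# If it exits non-zero again, collect logs for triage (flag usage is an assumption).
out/minikube-linux-amd64 logs -p multinode-859606 --file=multinode-859606.log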
TestMultiNode/serial/RestartMultiNode (87.1s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-859606 --wait=true -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:382: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-859606 --wait=true -v=8 --alsologtostderr --driver=kvm2 : exit status 90 (1m24.715053505s)

-- stdout --
	* [multinode-859606] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17764-80294/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-80294/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node multinode-859606 in cluster multinode-859606
	* Restarting existing kvm2 VM for "multinode-859606" ...
	* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Starting worker node multinode-859606-m02 in cluster multinode-859606
	* Restarting existing kvm2 VM for "multinode-859606-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.39.40
	
	

-- /stdout --
** stderr ** 
	I1212 00:36:19.566152  104530 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:36:19.566265  104530 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:36:19.566273  104530 out.go:309] Setting ErrFile to fd 2...
	I1212 00:36:19.566277  104530 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:36:19.566462  104530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-80294/.minikube/bin
	I1212 00:36:19.566987  104530 out.go:303] Setting JSON to false
	I1212 00:36:19.567880  104530 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":11880,"bootTime":1702329500,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:36:19.567966  104530 start.go:138] virtualization: kvm guest
	I1212 00:36:19.570536  104530 out.go:177] * [multinode-859606] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 00:36:19.572060  104530 notify.go:220] Checking for updates...
	I1212 00:36:19.572071  104530 out.go:177]   - MINIKUBE_LOCATION=17764
	I1212 00:36:19.573648  104530 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:36:19.575043  104530 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17764-80294/kubeconfig
	I1212 00:36:19.576502  104530 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-80294/.minikube
	I1212 00:36:19.578073  104530 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:36:19.579463  104530 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:36:19.581288  104530 config.go:182] Loaded profile config "multinode-859606": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 00:36:19.581767  104530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1212 00:36:19.581821  104530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:36:19.596096  104530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36989
	I1212 00:36:19.596488  104530 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:36:19.597060  104530 main.go:141] libmachine: Using API Version  1
	I1212 00:36:19.597091  104530 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:36:19.597481  104530 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:36:19.597646  104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
	I1212 00:36:19.597948  104530 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 00:36:19.598247  104530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1212 00:36:19.598293  104530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:36:19.612639  104530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43365
	I1212 00:36:19.613044  104530 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:36:19.613494  104530 main.go:141] libmachine: Using API Version  1
	I1212 00:36:19.613515  104530 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:36:19.613814  104530 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:36:19.613998  104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
	I1212 00:36:19.648526  104530 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 00:36:19.650074  104530 start.go:298] selected driver: kvm2
	I1212 00:36:19.650086  104530 start.go:902] validating driver "kvm2" against &{Name:multinode-859606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:multinode-859606 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false ku
beflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMne
tPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:36:19.650266  104530 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:36:19.650710  104530 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:36:19.650794  104530 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17764-80294/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 00:36:19.664949  104530 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 00:36:19.665848  104530 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:36:19.665938  104530 cni.go:84] Creating CNI manager for ""
	I1212 00:36:19.665955  104530 cni.go:136] 2 nodes found, recommending kindnet
	I1212 00:36:19.665965  104530 start_flags.go:323] config:
	{Name:multinode-859606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-859606 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false
nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:36:19.666224  104530 iso.go:125] acquiring lock: {Name:mk9f395cbf4246894893bf64341667bb412992c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:36:19.668183  104530 out.go:177] * Starting control plane node multinode-859606 in cluster multinode-859606
	I1212 00:36:19.669663  104530 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 00:36:19.669706  104530 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17764-80294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 00:36:19.669717  104530 cache.go:56] Caching tarball of preloaded images
	I1212 00:36:19.669796  104530 preload.go:174] Found /home/jenkins/minikube-integration/17764-80294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 00:36:19.669808  104530 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 00:36:19.669923  104530 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/config.json ...
	I1212 00:36:19.670107  104530 start.go:365] acquiring machines lock for multinode-859606: {Name:mk381e91746c2e5b8a4620fe3fd447d80375e413 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 00:36:19.670157  104530 start.go:369] acquired machines lock for "multinode-859606" in 32.405µs
	I1212 00:36:19.670175  104530 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:36:19.670183  104530 fix.go:54] fixHost starting: 
	I1212 00:36:19.670424  104530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1212 00:36:19.670455  104530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:36:19.684474  104530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37667
	I1212 00:36:19.684891  104530 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:36:19.685333  104530 main.go:141] libmachine: Using API Version  1
	I1212 00:36:19.685356  104530 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:36:19.685644  104530 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:36:19.685828  104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
	I1212 00:36:19.685946  104530 main.go:141] libmachine: (multinode-859606) Calling .GetState
	I1212 00:36:19.687411  104530 fix.go:102] recreateIfNeeded on multinode-859606: state=Stopped err=<nil>
	I1212 00:36:19.687443  104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
	W1212 00:36:19.687615  104530 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 00:36:19.689763  104530 out.go:177] * Restarting existing kvm2 VM for "multinode-859606" ...
	I1212 00:36:19.691324  104530 main.go:141] libmachine: (multinode-859606) Calling .Start
	I1212 00:36:19.691550  104530 main.go:141] libmachine: (multinode-859606) Ensuring networks are active...
	I1212 00:36:19.692253  104530 main.go:141] libmachine: (multinode-859606) Ensuring network default is active
	I1212 00:36:19.692574  104530 main.go:141] libmachine: (multinode-859606) Ensuring network mk-multinode-859606 is active
	I1212 00:36:19.692847  104530 main.go:141] libmachine: (multinode-859606) Getting domain xml...
	I1212 00:36:19.693505  104530 main.go:141] libmachine: (multinode-859606) Creating domain...
	I1212 00:36:20.929419  104530 main.go:141] libmachine: (multinode-859606) Waiting to get IP...
	I1212 00:36:20.930523  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:20.930912  104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
	I1212 00:36:20.931040  104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:20.930906  104565 retry.go:31] will retry after 273.212272ms: waiting for machine to come up
	I1212 00:36:21.205460  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:21.205872  104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
	I1212 00:36:21.205901  104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:21.205852  104565 retry.go:31] will retry after 326.892458ms: waiting for machine to come up
	I1212 00:36:21.534529  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:21.534921  104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
	I1212 00:36:21.534943  104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:21.534891  104565 retry.go:31] will retry after 343.135816ms: waiting for machine to come up
	I1212 00:36:21.879459  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:21.879929  104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
	I1212 00:36:21.879953  104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:21.879870  104565 retry.go:31] will retry after 589.671783ms: waiting for machine to come up
	I1212 00:36:22.471637  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:22.472097  104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
	I1212 00:36:22.472120  104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:22.472073  104565 retry.go:31] will retry after 637.139279ms: waiting for machine to come up
	I1212 00:36:23.110881  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:23.111236  104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
	I1212 00:36:23.111267  104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:23.111178  104565 retry.go:31] will retry after 745.620292ms: waiting for machine to come up
	I1212 00:36:23.858157  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:23.858677  104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
	I1212 00:36:23.858707  104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:23.858634  104565 retry.go:31] will retry after 1.181130732s: waiting for machine to come up
	I1212 00:36:25.041534  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:25.041972  104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
	I1212 00:36:25.042004  104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:25.041923  104565 retry.go:31] will retry after 1.339637741s: waiting for machine to come up
	I1212 00:36:26.383605  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:26.383992  104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
	I1212 00:36:26.384019  104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:26.383923  104565 retry.go:31] will retry after 1.520765812s: waiting for machine to come up
	I1212 00:36:27.906937  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:27.907387  104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
	I1212 00:36:27.907415  104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:27.907357  104565 retry.go:31] will retry after 1.874600317s: waiting for machine to come up
	I1212 00:36:29.783675  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:29.784134  104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
	I1212 00:36:29.784174  104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:29.784075  104565 retry.go:31] will retry after 2.274077714s: waiting for machine to come up
	I1212 00:36:32.061527  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:32.061959  104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
	I1212 00:36:32.061986  104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:32.061913  104565 retry.go:31] will retry after 3.21102487s: waiting for machine to come up
	I1212 00:36:35.274900  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:35.275327  104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
	I1212 00:36:35.275356  104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:35.275295  104565 retry.go:31] will retry after 4.00191762s: waiting for machine to come up
	I1212 00:36:39.281352  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.281835  104530 main.go:141] libmachine: (multinode-859606) Found IP for machine: 192.168.39.40
	I1212 00:36:39.281858  104530 main.go:141] libmachine: (multinode-859606) Reserving static IP address...
	I1212 00:36:39.281874  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has current primary IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.282305  104530 main.go:141] libmachine: (multinode-859606) Reserved static IP address: 192.168.39.40
	I1212 00:36:39.282362  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "multinode-859606", mac: "52:54:00:16:26:7f", ip: "192.168.39.40"} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:39.282382  104530 main.go:141] libmachine: (multinode-859606) Waiting for SSH to be available...
	I1212 00:36:39.282413  104530 main.go:141] libmachine: (multinode-859606) DBG | skip adding static IP to network mk-multinode-859606 - found existing host DHCP lease matching {name: "multinode-859606", mac: "52:54:00:16:26:7f", ip: "192.168.39.40"}
	I1212 00:36:39.282430  104530 main.go:141] libmachine: (multinode-859606) DBG | Getting to WaitForSSH function...
	I1212 00:36:39.284738  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.285057  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:39.285110  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.285169  104530 main.go:141] libmachine: (multinode-859606) DBG | Using SSH client type: external
	I1212 00:36:39.285210  104530 main.go:141] libmachine: (multinode-859606) DBG | Using SSH private key: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606/id_rsa (-rw-------)
	I1212 00:36:39.285247  104530 main.go:141] libmachine: (multinode-859606) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.40 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 00:36:39.285259  104530 main.go:141] libmachine: (multinode-859606) DBG | About to run SSH command:
	I1212 00:36:39.285268  104530 main.go:141] libmachine: (multinode-859606) DBG | exit 0
	I1212 00:36:39.375522  104530 main.go:141] libmachine: (multinode-859606) DBG | SSH cmd err, output: <nil>: 
	I1212 00:36:39.375955  104530 main.go:141] libmachine: (multinode-859606) Calling .GetConfigRaw
	I1212 00:36:39.376683  104530 main.go:141] libmachine: (multinode-859606) Calling .GetIP
	I1212 00:36:39.379083  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.379448  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:39.379483  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.379735  104530 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/config.json ...
	I1212 00:36:39.379953  104530 machine.go:88] provisioning docker machine ...
	I1212 00:36:39.379970  104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
	I1212 00:36:39.380177  104530 main.go:141] libmachine: (multinode-859606) Calling .GetMachineName
	I1212 00:36:39.380335  104530 buildroot.go:166] provisioning hostname "multinode-859606"
	I1212 00:36:39.380350  104530 main.go:141] libmachine: (multinode-859606) Calling .GetMachineName
	I1212 00:36:39.380483  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
	I1212 00:36:39.382706  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.383084  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:39.383109  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.383231  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
	I1212 00:36:39.383413  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:39.383548  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:39.383686  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
	I1212 00:36:39.383852  104530 main.go:141] libmachine: Using SSH client type: native
	I1212 00:36:39.384221  104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I1212 00:36:39.384236  104530 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-859606 && echo "multinode-859606" | sudo tee /etc/hostname
	I1212 00:36:39.519767  104530 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-859606
	
	I1212 00:36:39.519800  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
	I1212 00:36:39.522378  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.522790  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:39.522832  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.522956  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
	I1212 00:36:39.523177  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:39.523364  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:39.523491  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
	I1212 00:36:39.523659  104530 main.go:141] libmachine: Using SSH client type: native
	I1212 00:36:39.523993  104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I1212 00:36:39.524011  104530 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-859606' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-859606/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-859606' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:36:39.656285  104530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:36:39.656370  104530 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17764-80294/.minikube CaCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17764-80294/.minikube}
	I1212 00:36:39.656408  104530 buildroot.go:174] setting up certificates
	I1212 00:36:39.656417  104530 provision.go:83] configureAuth start
	I1212 00:36:39.656432  104530 main.go:141] libmachine: (multinode-859606) Calling .GetMachineName
	I1212 00:36:39.656702  104530 main.go:141] libmachine: (multinode-859606) Calling .GetIP
	I1212 00:36:39.659384  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.659735  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:39.659764  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.659868  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
	I1212 00:36:39.662155  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.662517  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:39.662547  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.662670  104530 provision.go:138] copyHostCerts
	I1212 00:36:39.662701  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem
	I1212 00:36:39.662745  104530 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem, removing ...
	I1212 00:36:39.662764  104530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem
	I1212 00:36:39.662840  104530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem (1078 bytes)
	I1212 00:36:39.662932  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem
	I1212 00:36:39.662954  104530 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem, removing ...
	I1212 00:36:39.662963  104530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem
	I1212 00:36:39.662998  104530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem (1123 bytes)
	I1212 00:36:39.663072  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem
	I1212 00:36:39.663106  104530 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem, removing ...
	I1212 00:36:39.663115  104530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem
	I1212 00:36:39.663149  104530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem (1679 bytes)
	I1212 00:36:39.663211  104530 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca-key.pem org=jenkins.multinode-859606 san=[192.168.39.40 192.168.39.40 localhost 127.0.0.1 minikube multinode-859606]
	I1212 00:36:39.752771  104530 provision.go:172] copyRemoteCerts
	I1212 00:36:39.752840  104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:36:39.752864  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
	I1212 00:36:39.755641  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.755981  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:39.756012  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.756148  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
	I1212 00:36:39.756362  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:39.756505  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
	I1212 00:36:39.756620  104530 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606/id_rsa Username:docker}
	I1212 00:36:39.848757  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:36:39.848827  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:36:39.872145  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:36:39.872230  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:36:39.895524  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:36:39.895625  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 00:36:39.919081  104530 provision.go:86] duration metric: configureAuth took 262.648578ms
	I1212 00:36:39.919117  104530 buildroot.go:189] setting minikube options for container-runtime
	I1212 00:36:39.919362  104530 config.go:182] Loaded profile config "multinode-859606": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 00:36:39.919392  104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
	I1212 00:36:39.919652  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
	I1212 00:36:39.922322  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.922662  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:39.922694  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.922873  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
	I1212 00:36:39.923053  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:39.923205  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:39.923322  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
	I1212 00:36:39.923479  104530 main.go:141] libmachine: Using SSH client type: native
	I1212 00:36:39.923797  104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I1212 00:36:39.923808  104530 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 00:36:40.049654  104530 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 00:36:40.049683  104530 buildroot.go:70] root file system type: tmpfs
	I1212 00:36:40.049826  104530 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 00:36:40.049854  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
	I1212 00:36:40.052273  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:40.052615  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:40.052648  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:40.052798  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
	I1212 00:36:40.053014  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:40.053178  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:40.053328  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
	I1212 00:36:40.053470  104530 main.go:141] libmachine: Using SSH client type: native
	I1212 00:36:40.053822  104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I1212 00:36:40.053890  104530 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 00:36:40.188800  104530 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 00:36:40.188832  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
	I1212 00:36:40.191559  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:40.191974  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:40.192007  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:40.192190  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
	I1212 00:36:40.192371  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:40.192563  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:40.192665  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
	I1212 00:36:40.192866  104530 main.go:141] libmachine: Using SSH client type: native
	I1212 00:36:40.193267  104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I1212 00:36:40.193286  104530 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 00:36:41.206767  104530 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1212 00:36:41.206800  104530 machine.go:91] provisioned docker machine in 1.826833328s
	I1212 00:36:41.206817  104530 start.go:300] post-start starting for "multinode-859606" (driver="kvm2")
	I1212 00:36:41.206830  104530 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:36:41.206852  104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
	I1212 00:36:41.207178  104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:36:41.207202  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
	I1212 00:36:41.209997  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:41.210348  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:41.210381  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:41.210498  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
	I1212 00:36:41.210690  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:41.210833  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
	I1212 00:36:41.210981  104530 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606/id_rsa Username:docker}
	I1212 00:36:41.301876  104530 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:36:41.306227  104530 command_runner.go:130] > NAME=Buildroot
	I1212 00:36:41.306246  104530 command_runner.go:130] > VERSION=2021.02.12-1-g0ec83c8-dirty
	I1212 00:36:41.306250  104530 command_runner.go:130] > ID=buildroot
	I1212 00:36:41.306262  104530 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 00:36:41.306266  104530 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 00:36:41.306469  104530 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 00:36:41.306487  104530 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-80294/.minikube/addons for local assets ...
	I1212 00:36:41.306534  104530 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-80294/.minikube/files for local assets ...
	I1212 00:36:41.306599  104530 filesync.go:149] local asset: /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem -> 876092.pem in /etc/ssl/certs
	I1212 00:36:41.306609  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem -> /etc/ssl/certs/876092.pem
	I1212 00:36:41.306693  104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:36:41.315869  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem --> /etc/ssl/certs/876092.pem (1708 bytes)
	I1212 00:36:41.338667  104530 start.go:303] post-start completed in 131.83456ms
	I1212 00:36:41.338691  104530 fix.go:56] fixHost completed within 21.668507657s
	I1212 00:36:41.338718  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
	I1212 00:36:41.341292  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:41.341664  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:41.341694  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:41.341888  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
	I1212 00:36:41.342101  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:41.342241  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:41.342408  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
	I1212 00:36:41.342541  104530 main.go:141] libmachine: Using SSH client type: native
	I1212 00:36:41.342886  104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I1212 00:36:41.342902  104530 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 00:36:41.468622  104530 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702341401.415199028
	
	I1212 00:36:41.468653  104530 fix.go:206] guest clock: 1702341401.415199028
	I1212 00:36:41.468663  104530 fix.go:219] Guest: 2023-12-12 00:36:41.415199028 +0000 UTC Remote: 2023-12-12 00:36:41.338694258 +0000 UTC m=+21.821939649 (delta=76.50477ms)
	I1212 00:36:41.468688  104530 fix.go:190] guest clock delta is within tolerance: 76.50477ms
	I1212 00:36:41.468695  104530 start.go:83] releasing machines lock for "multinode-859606", held for 21.798528151s
	I1212 00:36:41.468721  104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
	I1212 00:36:41.469036  104530 main.go:141] libmachine: (multinode-859606) Calling .GetIP
	I1212 00:36:41.471587  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:41.471996  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:41.472029  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:41.472196  104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
	I1212 00:36:41.472679  104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
	I1212 00:36:41.472871  104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
	I1212 00:36:41.472969  104530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:36:41.473018  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
	I1212 00:36:41.473104  104530 ssh_runner.go:195] Run: cat /version.json
	I1212 00:36:41.473135  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
	I1212 00:36:41.475372  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:41.475531  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:41.475739  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:41.475765  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:41.475949  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:41.475965  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
	I1212 00:36:41.475979  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:41.476148  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
	I1212 00:36:41.476167  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:41.476322  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
	I1212 00:36:41.476325  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:41.476507  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
	I1212 00:36:41.476503  104530 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606/id_rsa Username:docker}
	I1212 00:36:41.476677  104530 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606/id_rsa Username:docker}
	I1212 00:36:41.586671  104530 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 00:36:41.587519  104530 command_runner.go:130] > {"iso_version": "v1.32.1-1701996673-17738", "kicbase_version": "v0.0.42-1701974066-17719", "minikube_version": "v1.32.0", "commit": "2518fadffa02a308edcd7fa670f350a21819c5e4"}
	I1212 00:36:41.587648  104530 ssh_runner.go:195] Run: systemctl --version
	I1212 00:36:41.593336  104530 command_runner.go:130] > systemd 247 (247)
	I1212 00:36:41.593360  104530 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1212 00:36:41.593423  104530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:36:41.598984  104530 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 00:36:41.599019  104530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:36:41.599060  104530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:36:41.614960  104530 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 00:36:41.614996  104530 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
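	# The find/mv above is minikube renaming any bridge/podman CNI configs so they stop taking
	# effect; a shell-quoted restatement of the same command (paths and patterns taken from this log):
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	    -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;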
	I1212 00:36:41.615008  104530 start.go:475] detecting cgroup driver to use...
	I1212 00:36:41.615155  104530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:36:41.631749  104530 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 00:36:41.632091  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 00:36:41.642135  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 00:36:41.651964  104530 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 00:36:41.652033  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 00:36:41.661909  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:36:41.672216  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 00:36:41.681323  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:36:41.691358  104530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:36:41.701487  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 00:36:41.711473  104530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:36:41.720346  104530 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 00:36:41.720490  104530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:36:41.729603  104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:36:41.829613  104530 ssh_runner.go:195] Run: sudo systemctl restart containerd
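	# Condensed sketch of the containerd reconfiguration performed above (the same sed edits,
	# restated for readability; not an exact replay of the log):
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml   # force cgroupfs driver
	sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd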
	I1212 00:36:41.846807  104530 start.go:475] detecting cgroup driver to use...
	I1212 00:36:41.846894  104530 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 00:36:41.859661  104530 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 00:36:41.860603  104530 command_runner.go:130] > [Unit]
	I1212 00:36:41.860621  104530 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 00:36:41.860629  104530 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 00:36:41.860638  104530 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 00:36:41.860648  104530 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 00:36:41.860662  104530 command_runner.go:130] > StartLimitBurst=3
	I1212 00:36:41.860671  104530 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 00:36:41.860679  104530 command_runner.go:130] > [Service]
	I1212 00:36:41.860686  104530 command_runner.go:130] > Type=notify
	I1212 00:36:41.860694  104530 command_runner.go:130] > Restart=on-failure
	I1212 00:36:41.860715  104530 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 00:36:41.860734  104530 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 00:36:41.860748  104530 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 00:36:41.860757  104530 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 00:36:41.860767  104530 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 00:36:41.860781  104530 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 00:36:41.860791  104530 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 00:36:41.860803  104530 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 00:36:41.860812  104530 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 00:36:41.860818  104530 command_runner.go:130] > ExecStart=
	I1212 00:36:41.860837  104530 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1212 00:36:41.860845  104530 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 00:36:41.860854  104530 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 00:36:41.860863  104530 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 00:36:41.860867  104530 command_runner.go:130] > LimitNOFILE=infinity
	I1212 00:36:41.860872  104530 command_runner.go:130] > LimitNPROC=infinity
	I1212 00:36:41.860876  104530 command_runner.go:130] > LimitCORE=infinity
	I1212 00:36:41.860881  104530 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 00:36:41.860886  104530 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 00:36:41.860893  104530 command_runner.go:130] > TasksMax=infinity
	I1212 00:36:41.860897  104530 command_runner.go:130] > TimeoutStartSec=0
	I1212 00:36:41.860903  104530 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 00:36:41.860907  104530 command_runner.go:130] > Delegate=yes
	I1212 00:36:41.860912  104530 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 00:36:41.860916  104530 command_runner.go:130] > KillMode=process
	I1212 00:36:41.860921  104530 command_runner.go:130] > [Install]
	I1212 00:36:41.860934  104530 command_runner.go:130] > WantedBy=multi-user.target
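	# The empty "ExecStart=" directly above the full ExecStart line is the standard systemd idiom
	# for overriding a list-type directive from a drop-in: the blank assignment clears the command
	# inherited from the base unit, then the next line sets the one to use. A minimal drop-in of
	# the same shape (hypothetical path and dockerd flags, for illustration only):
	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null <<-'EOF'
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker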
	I1212 00:36:41.861408  104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:36:41.875266  104530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:36:41.894559  104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:36:41.907084  104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 00:36:41.919502  104530 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 00:36:41.951570  104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 00:36:41.963632  104530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:36:41.980713  104530 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 00:36:41.980788  104530 ssh_runner.go:195] Run: which cri-dockerd
	I1212 00:36:41.984334  104530 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 00:36:41.984645  104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 00:36:41.993852  104530 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 00:36:42.009538  104530 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 00:36:42.118265  104530 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 00:36:42.228976  104530 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 00:36:42.229126  104530 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 00:36:42.245311  104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:36:42.345292  104530 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 00:36:43.830127  104530 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.484785426s)
	I1212 00:36:43.830211  104530 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 00:36:43.943279  104530 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 00:36:44.053942  104530 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 00:36:44.164844  104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:36:44.275934  104530 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 00:36:44.291963  104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:36:44.392776  104530 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1212 00:36:44.474244  104530 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 00:36:44.474311  104530 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 00:36:44.480515  104530 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1212 00:36:44.480535  104530 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 00:36:44.480541  104530 command_runner.go:130] > Device: 16h/22d	Inode: 819         Links: 1
	I1212 00:36:44.480548  104530 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1212 00:36:44.480554  104530 command_runner.go:130] > Access: 2023-12-12 00:36:44.352977075 +0000
	I1212 00:36:44.480559  104530 command_runner.go:130] > Modify: 2023-12-12 00:36:44.352977075 +0000
	I1212 00:36:44.480564  104530 command_runner.go:130] > Change: 2023-12-12 00:36:44.355977075 +0000
	I1212 00:36:44.480567  104530 command_runner.go:130] >  Birth: -
	I1212 00:36:44.480717  104530 start.go:543] Will wait 60s for crictl version
	I1212 00:36:44.480773  104530 ssh_runner.go:195] Run: which crictl
	I1212 00:36:44.484627  104530 command_runner.go:130] > /usr/bin/crictl
	I1212 00:36:44.484837  104530 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:36:44.546652  104530 command_runner.go:130] > Version:  0.1.0
	I1212 00:36:44.546684  104530 command_runner.go:130] > RuntimeName:  docker
	I1212 00:36:44.546692  104530 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1212 00:36:44.546719  104530 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 00:36:44.548311  104530 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
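	# /etc/crictl.yaml was pointed at the cri-dockerd socket a few lines up, so the crictl calls
	# above reach Docker through cri-dockerd. An equivalent explicit invocation (sketch):
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a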
	I1212 00:36:44.548389  104530 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 00:36:44.576456  104530 command_runner.go:130] > 24.0.7
	I1212 00:36:44.576586  104530 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 00:36:44.599730  104530 command_runner.go:130] > 24.0.7
	I1212 00:36:44.602571  104530 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1212 00:36:44.602615  104530 main.go:141] libmachine: (multinode-859606) Calling .GetIP
	I1212 00:36:44.605105  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:44.605567  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:44.605594  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:44.605828  104530 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 00:36:44.609867  104530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:36:44.622768  104530 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 00:36:44.622818  104530 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 00:36:44.642692  104530 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1212 00:36:44.642720  104530 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1212 00:36:44.642729  104530 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1212 00:36:44.642749  104530 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1212 00:36:44.642756  104530 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1212 00:36:44.642764  104530 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1212 00:36:44.642773  104530 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1212 00:36:44.642785  104530 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1212 00:36:44.642793  104530 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:36:44.642804  104530 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1212 00:36:44.642841  104530 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1212 00:36:44.642858  104530 docker.go:601] Images already preloaded, skipping extraction
	I1212 00:36:44.642930  104530 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 00:36:44.661008  104530 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1212 00:36:44.661047  104530 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1212 00:36:44.661054  104530 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1212 00:36:44.661062  104530 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1212 00:36:44.661068  104530 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1212 00:36:44.661084  104530 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1212 00:36:44.661093  104530 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1212 00:36:44.661108  104530 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1212 00:36:44.661116  104530 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:36:44.661126  104530 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1212 00:36:44.661894  104530 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1212 00:36:44.661911  104530 cache_images.go:84] Images are preloaded, skipping loading
	I1212 00:36:44.661965  104530 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 00:36:44.688198  104530 command_runner.go:130] > cgroupfs
	I1212 00:36:44.688431  104530 cni.go:84] Creating CNI manager for ""
	I1212 00:36:44.688451  104530 cni.go:136] 2 nodes found, recommending kindnet
	I1212 00:36:44.688483  104530 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 00:36:44.688527  104530 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.40 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-859606 NodeName:multinode-859606 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.40"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.40 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:36:44.688714  104530 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.40
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-859606"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.40
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.40"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
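	# The block above is a single file stacking four kubeadm API objects (InitConfiguration,
	# ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---".
	# It is written to /var/tmp/minikube/kubeadm.yaml further down and fed to kubeadm phase
	# commands during the restart, e.g. (as run later in this log):
	sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
	    kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml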
	
	I1212 00:36:44.688816  104530 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-859606 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.40
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-859606 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 00:36:44.688879  104530 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 00:36:44.697808  104530 command_runner.go:130] > kubeadm
	I1212 00:36:44.697826  104530 command_runner.go:130] > kubectl
	I1212 00:36:44.697831  104530 command_runner.go:130] > kubelet
	I1212 00:36:44.697894  104530 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 00:36:44.697957  104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:36:44.705971  104530 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1212 00:36:44.720935  104530 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:36:44.735886  104530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1212 00:36:44.751846  104530 ssh_runner.go:195] Run: grep 192.168.39.40	control-plane.minikube.internal$ /etc/hosts
	I1212 00:36:44.755479  104530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.40	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:36:44.767240  104530 certs.go:56] Setting up /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606 for IP: 192.168.39.40
	I1212 00:36:44.767277  104530 certs.go:190] acquiring lock for shared ca certs: {Name:mk30ad7b34272eb8ac2c2d0da18d8d4f87fa28a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:36:44.767442  104530 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17764-80294/.minikube/ca.key
	I1212 00:36:44.767492  104530 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17764-80294/.minikube/proxy-client-ca.key
	I1212 00:36:44.767569  104530 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/client.key
	I1212 00:36:44.767614  104530 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/apiserver.key.7fcbe345
	I1212 00:36:44.767658  104530 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/proxy-client.key
	I1212 00:36:44.767671  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:36:44.767685  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:36:44.767697  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:36:44.767709  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:36:44.767723  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:36:44.767736  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:36:44.767748  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:36:44.767759  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:36:44.767806  104530 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/home/jenkins/minikube-integration/17764-80294/.minikube/certs/87609.pem (1338 bytes)
	W1212 00:36:44.767833  104530 certs.go:433] ignoring /home/jenkins/minikube-integration/17764-80294/.minikube/certs/home/jenkins/minikube-integration/17764-80294/.minikube/certs/87609_empty.pem, impossibly tiny 0 bytes
	I1212 00:36:44.767842  104530 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 00:36:44.767866  104530 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:36:44.767895  104530 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:36:44.767941  104530 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem (1679 bytes)
	I1212 00:36:44.767991  104530 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem (1708 bytes)
	I1212 00:36:44.768017  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem -> /usr/share/ca-certificates/876092.pem
	I1212 00:36:44.768033  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:36:44.768048  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/87609.pem -> /usr/share/ca-certificates/87609.pem
	I1212 00:36:44.768657  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 00:36:44.791629  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:36:44.814579  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:36:44.837176  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 00:36:44.859769  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:36:44.882517  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:36:44.905279  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:36:44.927814  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:36:44.950936  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem --> /usr/share/ca-certificates/876092.pem (1708 bytes)
	I1212 00:36:44.973314  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:36:44.995879  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/certs/87609.pem --> /usr/share/ca-certificates/87609.pem (1338 bytes)
	I1212 00:36:45.018814  104530 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:36:45.034741  104530 ssh_runner.go:195] Run: openssl version
	I1212 00:36:45.040084  104530 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 00:36:45.040159  104530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:36:45.049710  104530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:36:45.054223  104530 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:36:45.054253  104530 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:36:45.054292  104530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:36:45.059527  104530 command_runner.go:130] > b5213941
	I1212 00:36:45.059696  104530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 00:36:45.069012  104530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/87609.pem && ln -fs /usr/share/ca-certificates/87609.pem /etc/ssl/certs/87609.pem"
	I1212 00:36:45.078693  104530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/87609.pem
	I1212 00:36:45.083070  104530 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 00:16 /usr/share/ca-certificates/87609.pem
	I1212 00:36:45.083289  104530 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:16 /usr/share/ca-certificates/87609.pem
	I1212 00:36:45.083354  104530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/87609.pem
	I1212 00:36:45.089122  104530 command_runner.go:130] > 51391683
	I1212 00:36:45.089194  104530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/87609.pem /etc/ssl/certs/51391683.0"
	I1212 00:36:45.099154  104530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/876092.pem && ln -fs /usr/share/ca-certificates/876092.pem /etc/ssl/certs/876092.pem"
	I1212 00:36:45.108823  104530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/876092.pem
	I1212 00:36:45.113316  104530 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 00:16 /usr/share/ca-certificates/876092.pem
	I1212 00:36:45.113568  104530 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:16 /usr/share/ca-certificates/876092.pem
	I1212 00:36:45.113613  104530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/876092.pem
	I1212 00:36:45.118966  104530 command_runner.go:130] > 3ec20f2e
	I1212 00:36:45.119043  104530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/876092.pem /etc/ssl/certs/3ec20f2e.0"
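	# The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: each CA copied into
	# /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject hash so verifiers
	# can find it. The pattern, using the minikubeCA hash from this run (b5213941):
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"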
	I1212 00:36:45.128635  104530 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 00:36:45.132978  104530 command_runner.go:130] > ca.crt
	I1212 00:36:45.132994  104530 command_runner.go:130] > ca.key
	I1212 00:36:45.133000  104530 command_runner.go:130] > healthcheck-client.crt
	I1212 00:36:45.133004  104530 command_runner.go:130] > healthcheck-client.key
	I1212 00:36:45.133008  104530 command_runner.go:130] > peer.crt
	I1212 00:36:45.133014  104530 command_runner.go:130] > peer.key
	I1212 00:36:45.133018  104530 command_runner.go:130] > server.crt
	I1212 00:36:45.133022  104530 command_runner.go:130] > server.key
	I1212 00:36:45.133062  104530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:36:45.138700  104530 command_runner.go:130] > Certificate will not expire
	I1212 00:36:45.138753  104530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:36:45.143928  104530 command_runner.go:130] > Certificate will not expire
	I1212 00:36:45.143989  104530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:36:45.149974  104530 command_runner.go:130] > Certificate will not expire
	I1212 00:36:45.150040  104530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:36:45.155645  104530 command_runner.go:130] > Certificate will not expire
	I1212 00:36:45.155702  104530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:36:45.161120  104530 command_runner.go:130] > Certificate will not expire
	I1212 00:36:45.161172  104530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 00:36:45.166435  104530 command_runner.go:130] > Certificate will not expire
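	# Each "Certificate will not expire" line comes from openssl's -checkend probe: exit 0 if the
	# cert is still valid 86400 seconds (24h) from now, exit 1 otherwise. Sketch for one cert:
	if sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt; then
	    echo "etcd server cert valid for at least another day"
	else
	    echo "etcd server cert expires within 24h - would need regeneration"
	fi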
	I1212 00:36:45.166596  104530 kubeadm.go:404] StartCluster: {Name:multinode-859606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-859606 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubev
irt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:36:45.166771  104530 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 00:36:45.186362  104530 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:36:45.195450  104530 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 00:36:45.195478  104530 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 00:36:45.195486  104530 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 00:36:45.195492  104530 command_runner.go:130] > member
	I1212 00:36:45.195591  104530 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 00:36:45.195612  104530 kubeadm.go:636] restartCluster start
	I1212 00:36:45.195674  104530 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:36:45.205557  104530 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:45.205994  104530 kubeconfig.go:135] verify returned: extract IP: "multinode-859606" does not appear in /home/jenkins/minikube-integration/17764-80294/kubeconfig
	I1212 00:36:45.206105  104530 kubeconfig.go:146] "multinode-859606" context is missing from /home/jenkins/minikube-integration/17764-80294/kubeconfig - will repair!
	I1212 00:36:45.206407  104530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-80294/kubeconfig: {Name:mkf7cdfdedbee22114abcb4b16af22e84438f3f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:36:45.206781  104530 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17764-80294/kubeconfig
	I1212 00:36:45.207021  104530 kapi.go:59] client config for multinode-859606: &rest.Config{Host:"https://192.168.39.40:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/client.crt", KeyFile:"/home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/client.key", CAFile:"/home/jenkins/minikube-integration/17764-80294/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:36:45.207626  104530 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 00:36:45.207759  104530 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:36:45.216109  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:45.216158  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:45.227128  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:45.227145  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:45.227181  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:45.237721  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:45.738433  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:45.738513  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:45.749916  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:46.238556  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:46.238626  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:46.249796  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:46.738436  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:46.738510  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:46.750275  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:47.238820  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:47.238918  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:47.250330  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:47.737880  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:47.737967  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:47.749173  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:48.238871  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:48.238981  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:48.250477  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:48.737907  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:48.737986  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:48.749969  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:49.238635  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:49.238729  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:49.250296  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:49.738397  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:49.738483  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:49.750014  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:50.238638  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:50.238725  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:50.250537  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:50.738104  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:50.738212  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:50.749728  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:51.238279  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:51.238383  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:51.249977  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:51.738590  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:51.738674  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:51.750353  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:52.237967  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:52.238033  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:52.249749  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:52.738311  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:52.738400  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:52.749734  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:53.238473  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:53.238570  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:53.249803  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:53.738439  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:53.738545  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:53.749846  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:54.238458  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:54.238551  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:54.250276  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:54.738396  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:54.738477  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:54.749594  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:55.216372  104530 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 00:36:55.216413  104530 kubeadm.go:1135] stopping kube-system containers ...
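	# The repeated "Checking apiserver status" entries above are a polling loop (~10s window in
	# this run): minikube looks for a kube-apiserver process via pgrep and, finding none before
	# the deadline, decides the cluster needs reconfiguring. Roughly (a sketch, not minikube's
	# exact loop):
	deadline=$((SECONDS + 10))
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    [ "$SECONDS" -ge "$deadline" ] && { echo "apiserver not up - reconfigure"; break; }
	    sleep 0.5
	done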
	I1212 00:36:55.216471  104530 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 00:36:55.242800  104530 command_runner.go:130] > abde5ad85d4a
	I1212 00:36:55.242825  104530 command_runner.go:130] > 6960e84b00b8
	I1212 00:36:55.242831  104530 command_runner.go:130] > 55413175770e
	I1212 00:36:55.242840  104530 command_runner.go:130] > 56fd6254d6e1
	I1212 00:36:55.242847  104530 command_runner.go:130] > b63a75f45416
	I1212 00:36:55.242852  104530 command_runner.go:130] > 19421dc21753
	I1212 00:36:55.242858  104530 command_runner.go:130] > ecfcbd586321
	I1212 00:36:55.242864  104530 command_runner.go:130] > 9767a413586e
	I1212 00:36:55.242869  104530 command_runner.go:130] > 4ba778c674f0
	I1212 00:36:55.242874  104530 command_runner.go:130] > 19f9d76e8f1c
	I1212 00:36:55.242880  104530 command_runner.go:130] > fc27b8583502
	I1212 00:36:55.242885  104530 command_runner.go:130] > a49117d4a4c8
	I1212 00:36:55.242891  104530 command_runner.go:130] > 5aa25d818283
	I1212 00:36:55.242897  104530 command_runner.go:130] > ed0cff49857f
	I1212 00:36:55.242904  104530 command_runner.go:130] > 510b18b7b6d6
	I1212 00:36:55.242914  104530 command_runner.go:130] > 34ac7e63ee51
	I1212 00:36:55.242922  104530 command_runner.go:130] > dc5d8378ca26
	I1212 00:36:55.242929  104530 command_runner.go:130] > 335bd2869121
	I1212 00:36:55.242939  104530 command_runner.go:130] > 10ca85c531dc
	I1212 00:36:55.242951  104530 command_runner.go:130] > dcead5249b2f
	I1212 00:36:55.242961  104530 command_runner.go:130] > c3360b039380
	I1212 00:36:55.242971  104530 command_runner.go:130] > 08edfeaa5cab
	I1212 00:36:55.242979  104530 command_runner.go:130] > 5c674269e2eb
	I1212 00:36:55.242986  104530 command_runner.go:130] > e80fc43dacae
	I1212 00:36:55.242994  104530 command_runner.go:130] > 547ce8660107
	I1212 00:36:55.243001  104530 command_runner.go:130] > 6fce6e649e1a
	I1212 00:36:55.243008  104530 command_runner.go:130] > 7db8deb95763
	I1212 00:36:55.243015  104530 command_runner.go:130] > fef547bfcef9
	I1212 00:36:55.243026  104530 command_runner.go:130] > afcf416fd476
	I1212 00:36:55.243035  104530 command_runner.go:130] > d42aca9dd643
	I1212 00:36:55.243041  104530 command_runner.go:130] > 757215f5e48f
	I1212 00:36:55.243048  104530 command_runner.go:130] > f785241ab5c9
	I1212 00:36:55.243103  104530 docker.go:469] Stopping containers: [abde5ad85d4a 6960e84b00b8 55413175770e 56fd6254d6e1 b63a75f45416 19421dc21753 ecfcbd586321 9767a413586e 4ba778c674f0 19f9d76e8f1c fc27b8583502 a49117d4a4c8 5aa25d818283 ed0cff49857f 510b18b7b6d6 34ac7e63ee51 dc5d8378ca26 335bd2869121 10ca85c531dc dcead5249b2f c3360b039380 08edfeaa5cab 5c674269e2eb e80fc43dacae 547ce8660107 6fce6e649e1a 7db8deb95763 fef547bfcef9 afcf416fd476 d42aca9dd643 757215f5e48f f785241ab5c9]
	I1212 00:36:55.243180  104530 ssh_runner.go:195] Run: docker stop abde5ad85d4a 6960e84b00b8 55413175770e 56fd6254d6e1 b63a75f45416 19421dc21753 ecfcbd586321 9767a413586e 4ba778c674f0 19f9d76e8f1c fc27b8583502 a49117d4a4c8 5aa25d818283 ed0cff49857f 510b18b7b6d6 34ac7e63ee51 dc5d8378ca26 335bd2869121 10ca85c531dc dcead5249b2f c3360b039380 08edfeaa5cab 5c674269e2eb e80fc43dacae 547ce8660107 6fce6e649e1a 7db8deb95763 fef547bfcef9 afcf416fd476 d42aca9dd643 757215f5e48f f785241ab5c9
	I1212 00:36:55.267560  104530 command_runner.go:130] > abde5ad85d4a
	I1212 00:36:55.267589  104530 command_runner.go:130] > 6960e84b00b8
	I1212 00:36:55.267595  104530 command_runner.go:130] > 55413175770e
	I1212 00:36:55.267601  104530 command_runner.go:130] > 56fd6254d6e1
	I1212 00:36:55.267608  104530 command_runner.go:130] > b63a75f45416
	I1212 00:36:55.267613  104530 command_runner.go:130] > 19421dc21753
	I1212 00:36:55.267630  104530 command_runner.go:130] > ecfcbd586321
	I1212 00:36:55.267637  104530 command_runner.go:130] > 9767a413586e
	I1212 00:36:55.267643  104530 command_runner.go:130] > 4ba778c674f0
	I1212 00:36:55.267650  104530 command_runner.go:130] > 19f9d76e8f1c
	I1212 00:36:55.267656  104530 command_runner.go:130] > fc27b8583502
	I1212 00:36:55.267666  104530 command_runner.go:130] > a49117d4a4c8
	I1212 00:36:55.267672  104530 command_runner.go:130] > 5aa25d818283
	I1212 00:36:55.267679  104530 command_runner.go:130] > ed0cff49857f
	I1212 00:36:55.267707  104530 command_runner.go:130] > 510b18b7b6d6
	I1212 00:36:55.267723  104530 command_runner.go:130] > 34ac7e63ee51
	I1212 00:36:55.267729  104530 command_runner.go:130] > dc5d8378ca26
	I1212 00:36:55.267735  104530 command_runner.go:130] > 335bd2869121
	I1212 00:36:55.267742  104530 command_runner.go:130] > 10ca85c531dc
	I1212 00:36:55.267757  104530 command_runner.go:130] > dcead5249b2f
	I1212 00:36:55.267764  104530 command_runner.go:130] > c3360b039380
	I1212 00:36:55.267770  104530 command_runner.go:130] > 08edfeaa5cab
	I1212 00:36:55.267779  104530 command_runner.go:130] > 5c674269e2eb
	I1212 00:36:55.267785  104530 command_runner.go:130] > e80fc43dacae
	I1212 00:36:55.267798  104530 command_runner.go:130] > 547ce8660107
	I1212 00:36:55.267807  104530 command_runner.go:130] > 6fce6e649e1a
	I1212 00:36:55.267816  104530 command_runner.go:130] > 7db8deb95763
	I1212 00:36:55.267825  104530 command_runner.go:130] > fef547bfcef9
	I1212 00:36:55.267834  104530 command_runner.go:130] > afcf416fd476
	I1212 00:36:55.267843  104530 command_runner.go:130] > d42aca9dd643
	I1212 00:36:55.267852  104530 command_runner.go:130] > 757215f5e48f
	I1212 00:36:55.267861  104530 command_runner.go:130] > f785241ab5c9
	I1212 00:36:55.268959  104530 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 00:36:55.283176  104530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:36:55.291931  104530 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1212 00:36:55.291964  104530 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1212 00:36:55.291973  104530 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1212 00:36:55.291980  104530 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:36:55.292025  104530 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:36:55.292077  104530 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:36:55.300972  104530 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 00:36:55.300994  104530 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:55.409847  104530 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:36:55.410210  104530 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1212 00:36:55.410700  104530 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1212 00:36:55.411130  104530 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 00:36:55.411654  104530 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1212 00:36:55.412107  104530 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1212 00:36:55.413059  104530 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1212 00:36:55.413464  104530 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1212 00:36:55.413846  104530 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1212 00:36:55.414303  104530 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 00:36:55.414667  104530 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 00:36:55.416560  104530 command_runner.go:130] > [certs] Using the existing "sa" key
	I1212 00:36:55.416642  104530 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:56.211128  104530 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:36:56.211154  104530 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:36:56.211167  104530 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:36:56.211176  104530 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:36:56.211190  104530 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:36:56.211225  104530 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:56.277692  104530 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:36:56.278847  104530 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:36:56.278889  104530 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 00:36:56.393138  104530 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:56.490674  104530 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:36:56.490707  104530 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:36:56.495141  104530 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:36:56.496969  104530 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:36:56.505734  104530 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:56.568063  104530 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:36:56.574809  104530 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:36:56.574879  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:56.587806  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:57.100023  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:57.600145  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:58.099727  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:58.599716  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:59.099714  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:59.599934  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:00.099594  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:00.117319  104530 command_runner.go:130] > 1800
	I1212 00:37:00.117686  104530 api_server.go:72] duration metric: took 3.542880083s to wait for apiserver process to appear ...
	I1212 00:37:00.117709  104530 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:37:00.117727  104530 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
	I1212 00:37:02.771626  104530 api_server.go:279] https://192.168.39.40:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 00:37:02.771661  104530 api_server.go:103] status: https://192.168.39.40:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 00:37:02.771677  104530 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
	I1212 00:37:02.838010  104530 api_server.go:279] https://192.168.39.40:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 00:37:02.838048  104530 api_server.go:103] status: https://192.168.39.40:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 00:37:03.338843  104530 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
	I1212 00:37:03.344825  104530 api_server.go:279] https://192.168.39.40:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 00:37:03.344863  104530 api_server.go:103] status: https://192.168.39.40:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 00:37:03.838231  104530 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
	I1212 00:37:03.845511  104530 api_server.go:279] https://192.168.39.40:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 00:37:03.845548  104530 api_server.go:103] status: https://192.168.39.40:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 00:37:04.339177  104530 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
	I1212 00:37:04.344349  104530 api_server.go:279] https://192.168.39.40:8443/healthz returned 200:
	ok
	I1212 00:37:04.344445  104530 round_trippers.go:463] GET https://192.168.39.40:8443/version
	I1212 00:37:04.344456  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:04.344469  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:04.344482  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:04.352515  104530 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1212 00:37:04.352546  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:04.352557  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:04.352567  104530 round_trippers.go:580]     Content-Length: 264
	I1212 00:37:04.352575  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:04 GMT
	I1212 00:37:04.352584  104530 round_trippers.go:580]     Audit-Id: 63ee9643-66fd-4e1a-a212-0e71234e47a2
	I1212 00:37:04.352591  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:04.352598  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:04.352608  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:04.352649  104530 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 00:37:04.352786  104530 api_server.go:141] control plane version: v1.28.4
	I1212 00:37:04.352817  104530 api_server.go:131] duration metric: took 4.235100574s to wait for apiserver health ...
	I1212 00:37:04.352829  104530 cni.go:84] Creating CNI manager for ""
	I1212 00:37:04.352840  104530 cni.go:136] 2 nodes found, recommending kindnet
	I1212 00:37:04.355105  104530 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 00:37:04.356881  104530 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:37:04.363840  104530 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 00:37:04.363876  104530 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1212 00:37:04.363888  104530 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 00:37:04.363897  104530 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:37:04.363932  104530 command_runner.go:130] > Access: 2023-12-12 00:36:32.475977075 +0000
	I1212 00:37:04.363942  104530 command_runner.go:130] > Modify: 2023-12-08 06:25:18.000000000 +0000
	I1212 00:37:04.363949  104530 command_runner.go:130] > Change: 2023-12-12 00:36:30.674977075 +0000
	I1212 00:37:04.363955  104530 command_runner.go:130] >  Birth: -
	I1212 00:37:04.364014  104530 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 00:37:04.364031  104530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 00:37:04.384536  104530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:37:05.836837  104530 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1212 00:37:05.848426  104530 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1212 00:37:05.852488  104530 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1212 00:37:05.879402  104530 command_runner.go:130] > daemonset.apps/kindnet configured
	I1212 00:37:05.888362  104530 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.503791012s)
	I1212 00:37:05.888392  104530 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:37:05.888502  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
	I1212 00:37:05.888513  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:05.888524  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:05.888534  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:05.893619  104530 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 00:37:05.893657  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:05.893666  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:05.893674  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:05.893682  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:05.893690  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:05.893699  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:05 GMT
	I1212 00:37:05.893708  104530 round_trippers.go:580]     Audit-Id: 0f783734-4de0-49f4-945d-a630ecccf305
	I1212 00:37:05.895980  104530 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1199"},"items":[{"metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84957 chars]
	I1212 00:37:05.900061  104530 system_pods.go:59] 12 kube-system pods found
	I1212 00:37:05.900092  104530 system_pods.go:61] "coredns-5dd5756b68-t9jz8" [3605a003-e8d6-46b2-8fe7-f45647656622] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:37:05.900101  104530 system_pods.go:61] "etcd-multinode-859606" [7d6ae370-b910-4aef-8729-e141b307ae17] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 00:37:05.900106  104530 system_pods.go:61] "kindnet-9slwc" [6b37daf7-e9d5-47c5-ae94-01150282b6cf] Running
	I1212 00:37:05.900109  104530 system_pods.go:61] "kindnet-d4q52" [35ed1c56-7487-4b6d-ab1f-b5cfe6502739] Running
	I1212 00:37:05.900116  104530 system_pods.go:61] "kindnet-x2g5d" [c1dab004-2557-4b4f-975b-bd0b5a8f4d90] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 00:37:05.900123  104530 system_pods.go:61] "kube-apiserver-multinode-859606" [0060efa7-dc06-439e-878f-b93b0e016326] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 00:37:05.900135  104530 system_pods.go:61] "kube-controller-manager-multinode-859606" [901bf3ab-f34d-42c8-b1da-d5431ae0219f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 00:37:05.900155  104530 system_pods.go:61] "kube-proxy-6f6zz" [d5931621-47fd-4f1a-bf46-813dd8352f00] Running
	I1212 00:37:05.900164  104530 system_pods.go:61] "kube-proxy-prf7f" [8238226c-3d01-4b91-963b-7360206b8615] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 00:37:05.900171  104530 system_pods.go:61] "kube-proxy-q9h26" [7dd12033-bf81-4cd3-a412-3fe3211dc87b] Running
	I1212 00:37:05.900176  104530 system_pods.go:61] "kube-scheduler-multinode-859606" [19a4264c-6ba5-44f4-8419-6f04d6224c92] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 00:37:05.900188  104530 system_pods.go:61] "storage-provisioner" [a021db21-b335-4c05-8e32-808642dbb72e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:37:05.900194  104530 system_pods.go:74] duration metric: took 11.796772ms to wait for pod list to return data ...
	I1212 00:37:05.900203  104530 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:37:05.900268  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes
	I1212 00:37:05.900277  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:05.900284  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:05.900293  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:05.902944  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:05.902977  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:05.902987  104530 round_trippers.go:580]     Audit-Id: 81b09a2b-85f5-497e-b79a-4f9569b9a2e7
	I1212 00:37:05.903000  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:05.903011  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:05.903018  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:05.903031  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:05.903044  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:05 GMT
	I1212 00:37:05.903213  104530 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1199"},"items":[{"metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10135 chars]
	I1212 00:37:05.903891  104530 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 00:37:05.903937  104530 node_conditions.go:123] node cpu capacity is 2
	I1212 00:37:05.903961  104530 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 00:37:05.903967  104530 node_conditions.go:123] node cpu capacity is 2
	I1212 00:37:05.903974  104530 node_conditions.go:105] duration metric: took 3.766372ms to run NodePressure ...
	I1212 00:37:05.903993  104530 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:37:06.226936  104530 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1212 00:37:06.226983  104530 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1212 00:37:06.227046  104530 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 00:37:06.227181  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I1212 00:37:06.227195  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:06.227207  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:06.227216  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:06.231116  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:06.231139  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:06.231148  104530 round_trippers.go:580]     Audit-Id: 69442a0f-0400-4b49-b627-328626316be1
	I1212 00:37:06.231157  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:06.231166  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:06.231175  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:06.231194  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:06.231203  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:06 GMT
	I1212 00:37:06.231655  104530 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1202"},"items":[{"metadata":{"name":"etcd-multinode-859606","namespace":"kube-system","uid":"7d6ae370-b910-4aef-8729-e141b307ae17","resourceVersion":"1175","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.40:2379","kubernetes.io/config.hash":"3caa97c2c89fd490e8012711c8c24bd3","kubernetes.io/config.mirror":"3caa97c2c89fd490e8012711c8c24bd3","kubernetes.io/config.seen":"2023-12-12T00:30:03.645880014Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 29766 chars]
	I1212 00:37:06.233034  104530 kubeadm.go:787] kubelet initialised
	I1212 00:37:06.233057  104530 kubeadm.go:788] duration metric: took 5.989168ms waiting for restarted kubelet to initialise ...
	I1212 00:37:06.233070  104530 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:37:06.233145  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
	I1212 00:37:06.233158  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:06.233168  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:06.233176  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:06.237466  104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:37:06.237487  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:06.237497  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:06 GMT
	I1212 00:37:06.237506  104530 round_trippers.go:580]     Audit-Id: 39c8852d-e60c-4370-870d-ec951e0b6883
	I1212 00:37:06.237515  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:06.237528  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:06.237540  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:06.237548  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:06.238857  104530 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1202"},"items":[{"metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84957 chars]
	I1212 00:37:06.242660  104530 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:06.242743  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
	I1212 00:37:06.242753  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:06.242767  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:06.242780  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:06.245902  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:06.245916  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:06.245922  104530 round_trippers.go:580]     Audit-Id: 992a9c9e-aaec-49ae-b76c-09a84a7382e6
	I1212 00:37:06.245937  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:06.245952  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:06.245967  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:06.245974  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:06.245983  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:06 GMT
	I1212 00:37:06.246223  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1212 00:37:06.246613  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:06.246627  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:06.246633  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:06.246640  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:06.248752  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:06.248771  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:06.248780  104530 round_trippers.go:580]     Audit-Id: e035e5e3-4a98-439c-b13b-fca81955f3e3
	I1212 00:37:06.248788  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:06.248796  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:06.248805  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:06.248820  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:06.248828  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:06 GMT
	I1212 00:37:06.249002  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
	I1212 00:37:06.249315  104530 pod_ready.go:97] node "multinode-859606" hosting pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
	I1212 00:37:06.249335  104530 pod_ready.go:81] duration metric: took 6.646085ms waiting for pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace to be "Ready" ...
	E1212 00:37:06.249343  104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606" hosting pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
	I1212 00:37:06.249367  104530 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-859606" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:06.249423  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-859606
	I1212 00:37:06.249431  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:06.249441  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:06.249459  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:06.251411  104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:37:06.251431  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:06.251445  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:06.251453  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:06 GMT
	I1212 00:37:06.251462  104530 round_trippers.go:580]     Audit-Id: 78646abe-5066-4ba6-8d95-ec6fa44a1ab7
	I1212 00:37:06.251469  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:06.251476  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:06.251486  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:06.251707  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-859606","namespace":"kube-system","uid":"7d6ae370-b910-4aef-8729-e141b307ae17","resourceVersion":"1175","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.40:2379","kubernetes.io/config.hash":"3caa97c2c89fd490e8012711c8c24bd3","kubernetes.io/config.mirror":"3caa97c2c89fd490e8012711c8c24bd3","kubernetes.io/config.seen":"2023-12-12T00:30:03.645880014Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6296 chars]
	I1212 00:37:06.252098  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:06.252112  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:06.252121  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:06.252127  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:06.254083  104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:37:06.254103  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:06.254111  104530 round_trippers.go:580]     Audit-Id: 55b0d2ca-975d-4309-84a7-7cb9b1d8e361
	I1212 00:37:06.254120  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:06.254128  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:06.254136  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:06.254144  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:06.254152  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:06 GMT
	I1212 00:37:06.254323  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
	I1212 00:37:06.254602  104530 pod_ready.go:97] node "multinode-859606" hosting pod "etcd-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
	I1212 00:37:06.254619  104530 pod_ready.go:81] duration metric: took 5.239063ms waiting for pod "etcd-multinode-859606" in "kube-system" namespace to be "Ready" ...
	E1212 00:37:06.254626  104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606" hosting pod "etcd-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
	I1212 00:37:06.254639  104530 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-859606" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:06.254698  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-859606
	I1212 00:37:06.254708  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:06.254715  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:06.254727  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:06.256930  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:06.256949  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:06.256958  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:06.256967  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:06.256974  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:06.256983  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:06.256991  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:06 GMT
	I1212 00:37:06.257005  104530 round_trippers.go:580]     Audit-Id: aa63f562-c9c3-453f-92e9-d6a4c4b3232f
	I1212 00:37:06.257170  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-859606","namespace":"kube-system","uid":"0060efa7-dc06-439e-878f-b93b0e016326","resourceVersion":"1177","creationTimestamp":"2023-12-12T00:30:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.40:8443","kubernetes.io/config.hash":"6579d881f0553848179768317ac84853","kubernetes.io/config.mirror":"6579d881f0553848179768317ac84853","kubernetes.io/config.seen":"2023-12-12T00:29:55.207817853Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
	I1212 00:37:06.257538  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:06.257552  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:06.257558  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:06.257564  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:06.259425  104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:37:06.259445  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:06.259455  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:06.259463  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:06 GMT
	I1212 00:37:06.259471  104530 round_trippers.go:580]     Audit-Id: 6b47a0d5-4136-488c-882b-b7fdd50344ce
	I1212 00:37:06.259479  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:06.259487  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:06.259495  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:06.259782  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
	I1212 00:37:06.260081  104530 pod_ready.go:97] node "multinode-859606" hosting pod "kube-apiserver-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
	I1212 00:37:06.260097  104530 pod_ready.go:81] duration metric: took 5.449955ms waiting for pod "kube-apiserver-multinode-859606" in "kube-system" namespace to be "Ready" ...
	E1212 00:37:06.260103  104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606" hosting pod "kube-apiserver-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
	I1212 00:37:06.260113  104530 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-859606" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:06.260178  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:06.260188  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:06.260196  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:06.260209  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:06.262963  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:06.262979  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:06.262988  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:06.262996  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:06.263012  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:06 GMT
	I1212 00:37:06.263024  104530 round_trippers.go:580]     Audit-Id: eb54b9e3-39c5-4e0b-975b-d574f9443f33
	I1212 00:37:06.263034  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:06.263051  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:06.263697  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I1212 00:37:06.289336  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:06.289371  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:06.289380  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:06.289385  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:06.292233  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:06.292251  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:06.292257  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:06.292263  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:06 GMT
	I1212 00:37:06.292268  104530 round_trippers.go:580]     Audit-Id: 436076e3-8b39-45e2-80a6-f8f174ee0ea6
	I1212 00:37:06.292273  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:06.292280  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:06.292288  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:06.292641  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
	I1212 00:37:06.293036  104530 pod_ready.go:97] node "multinode-859606" hosting pod "kube-controller-manager-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
	I1212 00:37:06.293058  104530 pod_ready.go:81] duration metric: took 32.933264ms waiting for pod "kube-controller-manager-multinode-859606" in "kube-system" namespace to be "Ready" ...
	E1212 00:37:06.293071  104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606" hosting pod "kube-controller-manager-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
	I1212 00:37:06.293082  104530 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6f6zz" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:06.489501  104530 request.go:629] Waited for 196.342403ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6f6zz
	I1212 00:37:06.489581  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6f6zz
	I1212 00:37:06.489586  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:06.489598  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:06.489608  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:06.493034  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:06.493071  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:06.493081  104530 round_trippers.go:580]     Audit-Id: 0957bc6a-2f51-41b9-a929-11d0c801edd6
	I1212 00:37:06.493089  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:06.493098  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:06.493113  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:06.493126  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:06.493134  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:06 GMT
	I1212 00:37:06.493829  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6f6zz","generateName":"kube-proxy-","namespace":"kube-system","uid":"d5931621-47fd-4f1a-bf46-813dd8352f00","resourceVersion":"1087","creationTimestamp":"2023-12-12T00:32:02Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"92d6756f-ef04-4d8e-970a-e73854372aee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:32:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d6756f-ef04-4d8e-970a-e73854372aee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
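
	The "Waited for ... due to client-side throttling, not priority and fairness" messages interleaved here come from client-go's own rate limiter, not from the API server. A hedged sketch of how that limiter is configured; the QPS and Burst values are arbitrary examples, not the settings this run used:

	    package sketch

	    import (
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // newThrottledClient builds a clientset whose requests are rate limited on the
	    // client side; when the limiter delays a request, client-go emits the kind of
	    // "Waited for ... due to client-side throttling" line seen in this log.
	    func newThrottledClient(kubeconfig string) (*kubernetes.Clientset, error) {
	        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	        if err != nil {
	            return nil, err
	        }
	        cfg.QPS = 5    // example steady-state requests per second
	        cfg.Burst = 10 // example short burst allowance
	        return kubernetes.NewForConfig(cfg)
	    }
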
	I1212 00:37:06.688623  104530 request.go:629] Waited for 194.307311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m03
	I1212 00:37:06.688686  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m03
	I1212 00:37:06.688690  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:06.688698  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:06.688704  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:06.691344  104530 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1212 00:37:06.691361  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:06.691368  104530 round_trippers.go:580]     Audit-Id: 5d88fdfd-6f2f-44b1-a736-b6120a7e5a78
	I1212 00:37:06.691373  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:06.691390  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:06.691397  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:06.691405  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:06.691413  104530 round_trippers.go:580]     Content-Length: 210
	I1212 00:37:06.691425  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:06 GMT
	I1212 00:37:06.691448  104530 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-859606-m03\" not found","reason":"NotFound","details":{"name":"multinode-859606-m03","kind":"nodes"},"code":404}
	I1212 00:37:06.691655  104530 pod_ready.go:97] node "multinode-859606-m03" hosting pod "kube-proxy-6f6zz" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-859606-m03": nodes "multinode-859606-m03" not found
	I1212 00:37:06.691677  104530 pod_ready.go:81] duration metric: took 398.587524ms waiting for pod "kube-proxy-6f6zz" in "kube-system" namespace to be "Ready" ...
	E1212 00:37:06.691686  104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606-m03" hosting pod "kube-proxy-6f6zz" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-859606-m03": nodes "multinode-859606-m03" not found
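
	Here kube-proxy-6f6zz still references node multinode-859606-m03, which no longer exists after the restart, so the node lookup returns 404 and the readiness check is skipped rather than failed. A sketch of distinguishing that case with apimachinery's error helpers (helper name is illustrative):

	    package sketch

	    import (
	        "context"
	        "fmt"

	        apierrors "k8s.io/apimachinery/pkg/api/errors"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )

	    // describeHostingNode looks up the node a pod is scheduled on and treats a
	    // missing node (the HTTP 404 logged above) as a skippable condition rather
	    // than a hard error.
	    func describeHostingNode(ctx context.Context, cs kubernetes.Interface, nodeName string) (string, error) {
	        _, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	        if apierrors.IsNotFound(err) {
	            return fmt.Sprintf("node %q not found; skipping readiness check", nodeName), nil
	        }
	        if err != nil {
	            return "", err
	        }
	        return fmt.Sprintf("node %q exists", nodeName), nil
	    }
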
	I1212 00:37:06.691693  104530 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-prf7f" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:06.889174  104530 request.go:629] Waited for 197.369164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-prf7f
	I1212 00:37:06.889252  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-prf7f
	I1212 00:37:06.889259  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:06.889271  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:06.889280  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:06.893029  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:06.893047  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:06.893054  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:06 GMT
	I1212 00:37:06.893093  104530 round_trippers.go:580]     Audit-Id: 6846aa1b-42ae-4d5d-a1c7-384d5728840b
	I1212 00:37:06.893108  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:06.893115  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:06.893120  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:06.893128  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:06.893282  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-prf7f","generateName":"kube-proxy-","namespace":"kube-system","uid":"8238226c-3d01-4b91-963b-7360206b8615","resourceVersion":"1182","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"92d6756f-ef04-4d8e-970a-e73854372aee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d6756f-ef04-4d8e-970a-e73854372aee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5929 chars]
	I1212 00:37:07.089197  104530 request.go:629] Waited for 195.360283ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:07.089292  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:07.089298  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:07.089316  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:07.089322  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:07.091891  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:07.091927  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:07.091939  104530 round_trippers.go:580]     Audit-Id: 1d65f568-2c4a-42d4-bbba-8be4bdc48dd6
	I1212 00:37:07.091948  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:07.091961  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:07.091970  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:07.091979  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:07.091990  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:07 GMT
	I1212 00:37:07.092224  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
	I1212 00:37:07.092619  104530 pod_ready.go:97] node "multinode-859606" hosting pod "kube-proxy-prf7f" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
	I1212 00:37:07.092640  104530 pod_ready.go:81] duration metric: took 400.940457ms waiting for pod "kube-proxy-prf7f" in "kube-system" namespace to be "Ready" ...
	E1212 00:37:07.092649  104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606" hosting pod "kube-proxy-prf7f" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
	I1212 00:37:07.092655  104530 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q9h26" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:07.289085  104530 request.go:629] Waited for 196.361677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q9h26
	I1212 00:37:07.289150  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q9h26
	I1212 00:37:07.289155  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:07.289165  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:07.289173  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:07.292103  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:07.292128  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:07.292139  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:07.292147  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:07 GMT
	I1212 00:37:07.292160  104530 round_trippers.go:580]     Audit-Id: 4abc3eb7-8c82-4d87-b6ea-4f96f5e08936
	I1212 00:37:07.292172  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:07.292182  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:07.292187  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:07.292410  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q9h26","generateName":"kube-proxy-","namespace":"kube-system","uid":"7dd12033-bf81-4cd3-a412-3fe3211dc87b","resourceVersion":"978","creationTimestamp":"2023-12-12T00:31:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"92d6756f-ef04-4d8e-970a-e73854372aee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:31:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d6756f-ef04-4d8e-970a-e73854372aee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I1212 00:37:07.489267  104530 request.go:629] Waited for 196.338554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m02
	I1212 00:37:07.489349  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m02
	I1212 00:37:07.489362  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:07.489373  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:07.489380  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:07.491859  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:07.491887  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:07.491897  104530 round_trippers.go:580]     Audit-Id: a3f5d27d-a101-460d-9f23-04a20e185c6f
	I1212 00:37:07.491907  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:07.491930  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:07.491943  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:07.491952  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:07.491959  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:07 GMT
	I1212 00:37:07.492124  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606-m02","uid":"4dead465-c032-4274-8147-a5a7d38c1bf5","resourceVersion":"1083","creationTimestamp":"2023-12-12T00:34:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_35_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:34:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3805 chars]
	I1212 00:37:07.492453  104530 pod_ready.go:92] pod "kube-proxy-q9h26" in "kube-system" namespace has status "Ready":"True"
	I1212 00:37:07.492469  104530 pod_ready.go:81] duration metric: took 399.80822ms waiting for pod "kube-proxy-q9h26" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:07.492483  104530 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-859606" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:07.688932  104530 request.go:629] Waited for 196.377404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-859606
	I1212 00:37:07.689024  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-859606
	I1212 00:37:07.689047  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:07.689062  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:07.689086  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:07.692055  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:07.692076  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:07.692083  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:07.692088  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:07.692094  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:07.692101  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:07.692109  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:07 GMT
	I1212 00:37:07.692118  104530 round_trippers.go:580]     Audit-Id: 8c31c43b-819b-4283-9d9f-35f04a7e36e9
	I1212 00:37:07.692273  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-859606","namespace":"kube-system","uid":"19a4264c-6ba5-44f4-8419-6f04d6224c92","resourceVersion":"1173","creationTimestamp":"2023-12-12T00:30:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bf1fb8b18f1a6f1d2413ac0c0fd0e39c","kubernetes.io/config.mirror":"bf1fb8b18f1a6f1d2413ac0c0fd0e39c","kubernetes.io/config.seen":"2023-12-12T00:29:55.207819594Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I1212 00:37:07.889054  104530 request.go:629] Waited for 196.353748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:07.889117  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:07.889125  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:07.889137  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:07.889151  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:07.892167  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:07.892188  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:07.892194  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:07.892200  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:07.892226  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:07.892241  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:07.892250  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:07 GMT
	I1212 00:37:07.892257  104530 round_trippers.go:580]     Audit-Id: 9ee0618c-b043-4e2b-9e76-9d15b5ac7dc7
	I1212 00:37:07.892403  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
	I1212 00:37:07.892746  104530 pod_ready.go:97] node "multinode-859606" hosting pod "kube-scheduler-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
	I1212 00:37:07.892773  104530 pod_ready.go:81] duration metric: took 400.280036ms waiting for pod "kube-scheduler-multinode-859606" in "kube-system" namespace to be "Ready" ...
	E1212 00:37:07.892785  104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606" hosting pod "kube-scheduler-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
	I1212 00:37:07.892824  104530 pod_ready.go:38] duration metric: took 1.659742815s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
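
	The wait summarized above runs over pods matched by the listed labels (k8s-app=kube-dns, component=etcd, and so on). A rough client-go sketch of gathering that pod set, one List call per selector; the selector list is copied from the log line, the helper itself is an assumption of mine:

	    package sketch

	    import (
	        "context"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )

	    // systemCriticalPods lists the kube-system pods matching the label selectors
	    // that the wait above iterates over.
	    func systemCriticalPods(ctx context.Context, cs kubernetes.Interface) ([]corev1.Pod, error) {
	        selectors := []string{
	            "k8s-app=kube-dns",
	            "component=etcd",
	            "component=kube-apiserver",
	            "component=kube-controller-manager",
	            "k8s-app=kube-proxy",
	            "component=kube-scheduler",
	        }
	        var pods []corev1.Pod
	        for _, sel := range selectors {
	            list, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
	            if err != nil {
	                return nil, err
	            }
	            pods = append(pods, list.Items...)
	        }
	        return pods, nil
	    }
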
	I1212 00:37:07.892857  104530 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 00:37:07.904430  104530 command_runner.go:130] > -16
	I1212 00:37:07.904886  104530 ops.go:34] apiserver oom_adj: -16
	I1212 00:37:07.904899  104530 kubeadm.go:640] restartCluster took 22.709280238s
	I1212 00:37:07.904906  104530 kubeadm.go:406] StartCluster complete in 22.738318179s
	I1212 00:37:07.904921  104530 settings.go:142] acquiring lock: {Name:mk78e6f78084358f8434def169cefe6a62407a1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:37:07.904985  104530 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17764-80294/kubeconfig
	I1212 00:37:07.905654  104530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-80294/kubeconfig: {Name:mkf7cdfdedbee22114abcb4b16af22e84438f3f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:37:07.905860  104530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:37:07.906001  104530 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 00:37:07.909257  104530 out.go:177] * Enabled addons: 
	I1212 00:37:07.906240  104530 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17764-80294/kubeconfig
	I1212 00:37:07.906246  104530 config.go:182] Loaded profile config "multinode-859606": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 00:37:07.910860  104530 addons.go:502] enable addons completed in 4.865147ms: enabled=[]
	I1212 00:37:07.911128  104530 kapi.go:59] client config for multinode-859606: &rest.Config{Host:"https://192.168.39.40:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/client.crt", KeyFile:"/home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/client.key", CAFile:"/home/jenkins/minikube-integration/17764-80294/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
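
	The rest.Config dumped above authenticates with the profile's client certificate and key against the cluster CA. A minimal sketch of building an equivalent clientset directly from those paths (host and file paths are copied from the dump; the helper is illustrative, not how minikube constructs its client):

	    package sketch

	    import (
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/rest"
	    )

	    // clientFromCerts mirrors the rest.Config above: API server host, client
	    // cert/key from the minikube profile, and the cluster CA bundle.
	    func clientFromCerts() (*kubernetes.Clientset, error) {
	        cfg := &rest.Config{
	            Host: "https://192.168.39.40:8443",
	            TLSClientConfig: rest.TLSClientConfig{
	                CertFile: "/home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/client.crt",
	                KeyFile:  "/home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/client.key",
	                CAFile:   "/home/jenkins/minikube-integration/17764-80294/.minikube/ca.crt",
	            },
	        }
	        return kubernetes.NewForConfig(cfg)
	    }
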
	I1212 00:37:07.911447  104530 round_trippers.go:463] GET https://192.168.39.40:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 00:37:07.911463  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:07.911471  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:07.911477  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:07.914264  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:07.914281  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:07.914291  104530 round_trippers.go:580]     Audit-Id: 48f5a121-1933-4a22-a355-5496f01879d3
	I1212 00:37:07.914299  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:07.914306  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:07.914317  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:07.914324  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:07.914335  104530 round_trippers.go:580]     Content-Length: 292
	I1212 00:37:07.914346  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:07 GMT
	I1212 00:37:07.914379  104530 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"75766566-fdf3-4c8a-abaa-ce458e02b129","resourceVersion":"1201","creationTimestamp":"2023-12-12T00:30:03Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1212 00:37:07.914516  104530 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-859606" context rescaled to 1 replicas
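
	The coredns rescale above goes through the Deployment's scale subresource (the GET on .../deployments/coredns/scale, followed by an update when the replica count differs). A hedged client-go sketch of the same GET/update pair, with an illustrative helper name:

	    package sketch

	    import (
	        "context"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )

	    // rescaleCoreDNS reads the coredns Deployment's scale subresource and, when
	    // needed, writes back the desired replica count -- the operation the log
	    // above performs to keep coredns at 1 replica.
	    func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
	        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	        if err != nil {
	            return err
	        }
	        if scale.Spec.Replicas == replicas {
	            return nil // already at the desired count, as in this run
	        }
	        scale.Spec.Replicas = replicas
	        _, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	        return err
	    }
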
	I1212 00:37:07.914548  104530 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 00:37:07.917208  104530 out.go:177] * Verifying Kubernetes components...
	I1212 00:37:07.918721  104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:37:08.110540  104530 command_runner.go:130] > apiVersion: v1
	I1212 00:37:08.110578  104530 command_runner.go:130] > data:
	I1212 00:37:08.110585  104530 command_runner.go:130] >   Corefile: |
	I1212 00:37:08.110591  104530 command_runner.go:130] >     .:53 {
	I1212 00:37:08.110596  104530 command_runner.go:130] >         log
	I1212 00:37:08.110602  104530 command_runner.go:130] >         errors
	I1212 00:37:08.110608  104530 command_runner.go:130] >         health {
	I1212 00:37:08.110614  104530 command_runner.go:130] >            lameduck 5s
	I1212 00:37:08.110620  104530 command_runner.go:130] >         }
	I1212 00:37:08.110627  104530 command_runner.go:130] >         ready
	I1212 00:37:08.110636  104530 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1212 00:37:08.110647  104530 command_runner.go:130] >            pods insecure
	I1212 00:37:08.110655  104530 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1212 00:37:08.110667  104530 command_runner.go:130] >            ttl 30
	I1212 00:37:08.110673  104530 command_runner.go:130] >         }
	I1212 00:37:08.110683  104530 command_runner.go:130] >         prometheus :9153
	I1212 00:37:08.110693  104530 command_runner.go:130] >         hosts {
	I1212 00:37:08.110705  104530 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I1212 00:37:08.110714  104530 command_runner.go:130] >            fallthrough
	I1212 00:37:08.110724  104530 command_runner.go:130] >         }
	I1212 00:37:08.110732  104530 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1212 00:37:08.110737  104530 command_runner.go:130] >            max_concurrent 1000
	I1212 00:37:08.110743  104530 command_runner.go:130] >         }
	I1212 00:37:08.110748  104530 command_runner.go:130] >         cache 30
	I1212 00:37:08.110755  104530 command_runner.go:130] >         loop
	I1212 00:37:08.110761  104530 command_runner.go:130] >         reload
	I1212 00:37:08.110765  104530 command_runner.go:130] >         loadbalance
	I1212 00:37:08.110771  104530 command_runner.go:130] >     }
	I1212 00:37:08.110776  104530 command_runner.go:130] > kind: ConfigMap
	I1212 00:37:08.110782  104530 command_runner.go:130] > metadata:
	I1212 00:37:08.110787  104530 command_runner.go:130] >   creationTimestamp: "2023-12-12T00:30:03Z"
	I1212 00:37:08.110793  104530 command_runner.go:130] >   name: coredns
	I1212 00:37:08.110797  104530 command_runner.go:130] >   namespace: kube-system
	I1212 00:37:08.110804  104530 command_runner.go:130] >   resourceVersion: "407"
	I1212 00:37:08.110808  104530 command_runner.go:130] >   uid: 58df000b-e223-4f9f-a0ce-e6a345bc8b1e
	I1212 00:37:08.110871  104530 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
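
	The Corefile dump above is fetched so the restart can decide whether a host.minikube.internal entry still needs to be injected; here the hosts block already carries it, so the step is skipped. A rough client-go sketch of that check (helper name is mine):

	    package sketch

	    import (
	        "context"
	        "strings"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )

	    // corednsHasHostRecord fetches the coredns ConfigMap shown above and reports
	    // whether its Corefile already contains the host.minikube.internal entry.
	    func corednsHasHostRecord(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	        if err != nil {
	            return false, err
	        }
	        return strings.Contains(cm.Data["Corefile"], "host.minikube.internal"), nil
	    }
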
	I1212 00:37:08.110910  104530 node_ready.go:35] waiting up to 6m0s for node "multinode-859606" to be "Ready" ...
	I1212 00:37:08.111108  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:08.111132  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:08.111144  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:08.111155  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:08.115592  104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:37:08.115608  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:08.115615  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:08.115620  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:08 GMT
	I1212 00:37:08.115625  104530 round_trippers.go:580]     Audit-Id: 78e22458-8a23-48e3-9e27-578febb59a20
	I1212 00:37:08.115630  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:08.115635  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:08.115640  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:08.116255  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
	I1212 00:37:08.289077  104530 request.go:629] Waited for 172.38964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:08.289150  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:08.289155  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:08.289163  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:08.289178  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:08.291767  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:08.291787  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:08.291797  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:08.291806  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:08 GMT
	I1212 00:37:08.291817  104530 round_trippers.go:580]     Audit-Id: bd808d02-17db-44e3-ae16-8f55b7323fe8
	I1212 00:37:08.291829  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:08.291841  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:08.291852  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:08.292123  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
	I1212 00:37:08.793301  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:08.793331  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:08.793340  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:08.793346  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:08.796482  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:08.796514  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:08.796525  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:08.796533  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:08.796539  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:08.796544  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:08 GMT
	I1212 00:37:08.796549  104530 round_trippers.go:580]     Audit-Id: f551640f-6397-4f2f-ad7b-75e7a1ad4ab4
	I1212 00:37:08.796554  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:08.796722  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
	I1212 00:37:09.293409  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:09.293442  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:09.293453  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:09.293461  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:09.296451  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:09.296469  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:09.296477  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:09.296482  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:09.296487  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:09.296496  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:09.296519  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:09 GMT
	I1212 00:37:09.296527  104530 round_trippers.go:580]     Audit-Id: 2a8eef1a-1ec0-43cd-aba1-3dcd1603fa87
	I1212 00:37:09.296803  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
	I1212 00:37:09.793597  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:09.793626  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:09.793645  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:09.793664  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:09.796604  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:09.796624  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:09.796631  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:09.796636  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:09.796644  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:09.796649  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:09 GMT
	I1212 00:37:09.796654  104530 round_trippers.go:580]     Audit-Id: 022e877a-18b3-43f9-ab6d-dff649dfc9f8
	I1212 00:37:09.796659  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:09.796949  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:09.797279  104530 node_ready.go:49] node "multinode-859606" has status "Ready":"True"
	I1212 00:37:09.797303  104530 node_ready.go:38] duration metric: took 1.686360286s waiting for node "multinode-859606" to be "Ready" ...
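
	The node_ready wait above re-fetches the node until its Ready condition flips to True, which happens here on the third GET (resourceVersion 1149 to 1213) once the kubelet reports in. A simple polling sketch with a deadline in the same spirit; the interval and timeout parameters are placeholders, not minikube's values:

	    package sketch

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )

	    // waitForNodeReady re-fetches the node every interval until its Ready
	    // condition is True or the timeout expires, roughly what node_ready.go
	    // does in the lines above.
	    func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, interval, timeout time.Duration) error {
	        ctx, cancel := context.WithTimeout(ctx, timeout)
	        defer cancel()
	        for {
	            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	            if err == nil {
	                for _, cond := range node.Status.Conditions {
	                    if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
	                        return nil
	                    }
	                }
	            }
	            select {
	            case <-ctx.Done():
	                return fmt.Errorf("node %q not Ready within %s: %w", name, timeout, ctx.Err())
	            case <-time.After(interval):
	            }
	        }
	    }
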
	I1212 00:37:09.797315  104530 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:37:09.797375  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
	I1212 00:37:09.797386  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:09.797396  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:09.797406  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:09.801844  104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:37:09.801867  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:09.801876  104530 round_trippers.go:580]     Audit-Id: 420ea970-9f48-457c-b0f7-7ec9ec1a588e
	I1212 00:37:09.801885  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:09.801894  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:09.801904  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:09.801927  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:09.801938  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:09 GMT
	I1212 00:37:09.803506  104530 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1216"},"items":[{"metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83879 chars]
	I1212 00:37:09.806061  104530 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:09.806150  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
	I1212 00:37:09.806162  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:09.806174  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:09.806184  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:09.808345  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:09.808361  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:09.808374  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:09.808383  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:09.808397  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:09 GMT
	I1212 00:37:09.808405  104530 round_trippers.go:580]     Audit-Id: 9a9463c1-b358-492e-b922-367c6104207c
	I1212 00:37:09.808413  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:09.808422  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:09.808706  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1212 00:37:09.809215  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:09.809231  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:09.809238  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:09.809244  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:09.811292  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:09.811307  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:09.811316  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:09 GMT
	I1212 00:37:09.811323  104530 round_trippers.go:580]     Audit-Id: f5ebccd1-dc5e-4d64-b27a-f59d7a10b2c3
	I1212 00:37:09.811331  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:09.811346  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:09.811359  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:09.811367  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:09.811572  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:09.812037  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
	I1212 00:37:09.812052  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:09.812059  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:09.812065  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:09.813996  104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:37:09.814010  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:09.814019  104530 round_trippers.go:580]     Audit-Id: e587521b-4190-4251-9713-9fe4cfdc8df1
	I1212 00:37:09.814027  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:09.814034  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:09.814043  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:09.814054  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:09.814063  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:09 GMT
	I1212 00:37:09.814382  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1212 00:37:09.889078  104530 request.go:629] Waited for 74.284522ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:09.889133  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:09.889139  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:09.889148  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:09.889154  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:09.892171  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:09.892194  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:09.892203  104530 round_trippers.go:580]     Audit-Id: 6c0b5759-dcf0-429c-88bf-c342959f386c
	I1212 00:37:09.892229  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:09.892241  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:09.892250  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:09.892269  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:09.892283  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:09 GMT
	I1212 00:37:09.892510  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:10.393716  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
	I1212 00:37:10.393745  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:10.393755  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:10.393763  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:10.396859  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:10.396889  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:10.396899  104530 round_trippers.go:580]     Audit-Id: 5e8103b3-ec4e-4213-995d-24c751476571
	I1212 00:37:10.396907  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:10.396915  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:10.396923  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:10.396931  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:10.396939  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:10 GMT
	I1212 00:37:10.397178  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1212 00:37:10.397682  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:10.397698  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:10.397713  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:10.397722  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:10.399962  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:10.399981  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:10.399991  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:10 GMT
	I1212 00:37:10.399999  104530 round_trippers.go:580]     Audit-Id: 63def391-cbb3-428c-8bda-86f13b98f5c0
	I1212 00:37:10.400014  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:10.400026  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:10.400035  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:10.400046  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:10.400207  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:10.894000  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
	I1212 00:37:10.894037  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:10.894048  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:10.894057  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:10.899308  104530 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 00:37:10.899334  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:10.899344  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:10.899355  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:10.899362  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:10.899369  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:10 GMT
	I1212 00:37:10.899377  104530 round_trippers.go:580]     Audit-Id: a6f54ff0-c318-428c-9e20-5afa1d44815f
	I1212 00:37:10.899383  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:10.899671  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1212 00:37:10.900196  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:10.900212  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:10.900219  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:10.900225  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:10.902531  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:10.902550  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:10.902560  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:10.902568  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:10.902576  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:10 GMT
	I1212 00:37:10.902586  104530 round_trippers.go:580]     Audit-Id: 72a3507b-3092-4d9e-bfa5-e84c0a5f5811
	I1212 00:37:10.902599  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:10.902610  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:10.902856  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:11.393521  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
	I1212 00:37:11.393559  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:11.393569  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:11.393583  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:11.397962  104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:37:11.398001  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:11.398012  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:11.398020  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:11.398028  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:11.398036  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:11 GMT
	I1212 00:37:11.398048  104530 round_trippers.go:580]     Audit-Id: 36163564-e6ac-4456-b495-9930bf8c7c95
	I1212 00:37:11.398056  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:11.399514  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1212 00:37:11.400077  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:11.400105  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:11.400115  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:11.400129  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:11.402841  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:11.402874  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:11.402895  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:11 GMT
	I1212 00:37:11.402903  104530 round_trippers.go:580]     Audit-Id: cf888e1f-3585-4d4c-b47a-d65c1b673f60
	I1212 00:37:11.402913  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:11.402923  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:11.402936  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:11.402944  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:11.403152  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:11.893890  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
	I1212 00:37:11.893921  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:11.893930  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:11.893936  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:11.896885  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:11.896910  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:11.896920  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:11.896927  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:11.896934  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:11.896942  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:11.896949  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:11 GMT
	I1212 00:37:11.896956  104530 round_trippers.go:580]     Audit-Id: 560ccbf4-a93e-418b-97ef-b02d5b4a7c2a
	I1212 00:37:11.897291  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1212 00:37:11.897761  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:11.897778  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:11.897785  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:11.897791  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:11.900338  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:11.900381  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:11.900391  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:11 GMT
	I1212 00:37:11.900400  104530 round_trippers.go:580]     Audit-Id: 57fad163-7798-4518-b48a-afffca40ee66
	I1212 00:37:11.900408  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:11.900416  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:11.900428  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:11.900438  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:11.900617  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:11.900907  104530 pod_ready.go:102] pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace has status "Ready":"False"
	I1212 00:37:12.393289  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
	I1212 00:37:12.393323  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:12.393337  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:12.393346  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:12.397658  104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:37:12.397679  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:12.397686  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:12.397691  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:12 GMT
	I1212 00:37:12.397697  104530 round_trippers.go:580]     Audit-Id: 97d200a8-1144-4cfb-b7e7-ae622c67a09e
	I1212 00:37:12.397702  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:12.397707  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:12.397712  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:12.398001  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1212 00:37:12.398453  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:12.398468  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:12.398475  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:12.398480  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:12.401097  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:12.401115  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:12.401122  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:12 GMT
	I1212 00:37:12.401127  104530 round_trippers.go:580]     Audit-Id: 2a27c4e6-1e77-48fe-b9ff-18537a1ba771
	I1212 00:37:12.401135  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:12.401145  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:12.401153  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:12.401168  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:12.401283  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:12.893943  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
	I1212 00:37:12.893969  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:12.893977  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:12.893984  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:12.897025  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:12.897047  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:12.897057  104530 round_trippers.go:580]     Audit-Id: 551ec886-a3c8-4be6-946b-459f81574f91
	I1212 00:37:12.897064  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:12.897071  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:12.897082  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:12.897091  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:12.897103  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:12 GMT
	I1212 00:37:12.897283  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1212 00:37:12.898253  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:12.898328  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:12.898343  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:12.898352  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:12.902125  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:12.902151  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:12.902161  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:12 GMT
	I1212 00:37:12.902171  104530 round_trippers.go:580]     Audit-Id: bb98bd7a-c04d-437d-aef6-72f5de2e6aac
	I1212 00:37:12.902182  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:12.902196  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:12.902214  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:12.902227  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:12.902594  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:13.393264  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
	I1212 00:37:13.393294  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:13.393307  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:13.393317  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:13.396512  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:13.396534  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:13.396541  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:13.396546  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:13 GMT
	I1212 00:37:13.396552  104530 round_trippers.go:580]     Audit-Id: 7f6212d1-aaf4-45df-a3b0-bb989bb1227a
	I1212 00:37:13.396560  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:13.396569  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:13.396578  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:13.396776  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1212 00:37:13.397248  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:13.397262  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:13.397270  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:13.397275  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:13.399404  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:13.399423  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:13.399433  104530 round_trippers.go:580]     Audit-Id: 77e44ea3-4125-4d4b-9450-f85475c1539a
	I1212 00:37:13.399440  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:13.399447  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:13.399454  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:13.399464  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:13.399471  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:13 GMT
	I1212 00:37:13.399656  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:13.893292  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
	I1212 00:37:13.893317  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:13.893325  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:13.893331  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:13.896458  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:13.896475  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:13.896487  104530 round_trippers.go:580]     Audit-Id: ac46caca-dc3e-4d98-bda6-e430bb1fa8ae
	I1212 00:37:13.896494  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:13.896512  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:13.896519  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:13.896526  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:13.896534  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:13 GMT
	I1212 00:37:13.897107  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1231","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
	I1212 00:37:13.897587  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:13.897603  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:13.897613  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:13.897621  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:13.900547  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:13.900568  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:13.900578  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:13.900586  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:13 GMT
	I1212 00:37:13.900595  104530 round_trippers.go:580]     Audit-Id: e3dbde9a-cc4a-4762-867f-d9e9a410aef1
	I1212 00:37:13.900603  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:13.900611  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:13.900643  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:13.900901  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:13.901209  104530 pod_ready.go:92] pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace has status "Ready":"True"
	I1212 00:37:13.901226  104530 pod_ready.go:81] duration metric: took 4.09514334s waiting for pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:13.901265  104530 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-859606" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:13.901326  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-859606
	I1212 00:37:13.901336  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:13.901346  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:13.901356  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:13.903529  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:13.903549  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:13.903558  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:13.903566  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:13.903574  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:13.903582  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:13 GMT
	I1212 00:37:13.903590  104530 round_trippers.go:580]     Audit-Id: d34bc26a-3f02-4be9-9af2-1ad0fadfbfa3
	I1212 00:37:13.903596  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:13.903967  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-859606","namespace":"kube-system","uid":"7d6ae370-b910-4aef-8729-e141b307ae17","resourceVersion":"1218","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.40:2379","kubernetes.io/config.hash":"3caa97c2c89fd490e8012711c8c24bd3","kubernetes.io/config.mirror":"3caa97c2c89fd490e8012711c8c24bd3","kubernetes.io/config.seen":"2023-12-12T00:30:03.645880014Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6072 chars]
	I1212 00:37:13.904430  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:13.904447  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:13.904454  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:13.904460  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:13.906383  104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:37:13.906404  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:13.906413  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:13.906420  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:13.906429  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:13.906444  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:13.906453  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:13 GMT
	I1212 00:37:13.906466  104530 round_trippers.go:580]     Audit-Id: 3f37632a-0e9f-4887-b36f-43d17d2e4134
	I1212 00:37:13.906620  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:13.906989  104530 pod_ready.go:92] pod "etcd-multinode-859606" in "kube-system" namespace has status "Ready":"True"
	I1212 00:37:13.907016  104530 pod_ready.go:81] duration metric: took 5.741099ms waiting for pod "etcd-multinode-859606" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:13.907041  104530 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-859606" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:13.907100  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-859606
	I1212 00:37:13.907110  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:13.907118  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:13.907125  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:13.909221  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:13.909237  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:13.909245  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:13.909253  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:13.909260  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:13 GMT
	I1212 00:37:13.909267  104530 round_trippers.go:580]     Audit-Id: 10369159-e62c-4dd4-8d77-2e82a59d784d
	I1212 00:37:13.909275  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:13.909287  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:13.909569  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-859606","namespace":"kube-system","uid":"0060efa7-dc06-439e-878f-b93b0e016326","resourceVersion":"1216","creationTimestamp":"2023-12-12T00:30:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.40:8443","kubernetes.io/config.hash":"6579d881f0553848179768317ac84853","kubernetes.io/config.mirror":"6579d881f0553848179768317ac84853","kubernetes.io/config.seen":"2023-12-12T00:29:55.207817853Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7607 chars]
	I1212 00:37:13.909929  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:13.909943  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:13.909953  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:13.909961  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:13.911781  104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:37:13.911800  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:13.911808  104530 round_trippers.go:580]     Audit-Id: c9f36dd0-0f04-4274-9537-6c203e1b93b8
	I1212 00:37:13.911817  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:13.911825  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:13.911833  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:13.911841  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:13.911848  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:13 GMT
	I1212 00:37:13.912152  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:13.912472  104530 pod_ready.go:92] pod "kube-apiserver-multinode-859606" in "kube-system" namespace has status "Ready":"True"
	I1212 00:37:13.912489  104530 pod_ready.go:81] duration metric: took 5.438494ms waiting for pod "kube-apiserver-multinode-859606" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:13.912497  104530 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-859606" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:14.088914  104530 request.go:629] Waited for 176.352891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:14.089000  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:14.089007  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:14.089021  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:14.089037  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:14.092809  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:14.092835  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:14.092845  104530 round_trippers.go:580]     Audit-Id: 2c2f7c55-459e-4d01-a3f2-96b1b6cb8c8b
	I1212 00:37:14.092853  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:14.092861  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:14.092869  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:14.092876  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:14.092885  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:14 GMT
	I1212 00:37:14.093110  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I1212 00:37:14.288948  104530 request.go:629] Waited for 195.377005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:14.289023  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:14.289032  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:14.289039  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:14.289053  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:14.291661  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:14.291688  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:14.291699  104530 round_trippers.go:580]     Audit-Id: 9a8ff279-becc-4981-a5d3-bab45d355f5b
	I1212 00:37:14.291709  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:14.291716  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:14.291721  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:14.291729  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:14.291734  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:14 GMT
	I1212 00:37:14.291936  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:14.489383  104530 request.go:629] Waited for 197.063929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:14.489461  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:14.489467  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:14.489475  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:14.489481  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:14.492357  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:14.492379  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:14.492386  104530 round_trippers.go:580]     Audit-Id: 12e5b7b5-fd32-4fe6-b1ff-eb7b4430f001
	I1212 00:37:14.492392  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:14.492397  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:14.492402  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:14.492407  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:14.492412  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:14 GMT
	I1212 00:37:14.492593  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I1212 00:37:14.689101  104530 request.go:629] Waited for 196.091909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:14.689191  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:14.689198  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:14.689208  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:14.689218  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:14.691837  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:14.691858  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:14.691865  104530 round_trippers.go:580]     Audit-Id: 46cb3999-d30b-4074-ad3e-89d7533c5936
	I1212 00:37:14.691870  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:14.691875  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:14.691880  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:14.691885  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:14.691891  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:14 GMT
	I1212 00:37:14.692335  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:15.193200  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:15.193224  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:15.193232  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:15.193239  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:15.196981  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:15.197000  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:15.197006  104530 round_trippers.go:580]     Audit-Id: e9469ca3-765f-4b94-bad8-b62081cb2809
	I1212 00:37:15.197012  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:15.197034  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:15.197042  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:15.197049  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:15.197056  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:15 GMT
	I1212 00:37:15.197197  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I1212 00:37:15.197635  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:15.197650  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:15.197657  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:15.197663  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:15.199909  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:15.199943  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:15.199952  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:15 GMT
	I1212 00:37:15.199959  104530 round_trippers.go:580]     Audit-Id: 55872ce3-0e31-4a29-bd8d-2fef53f7f5ad
	I1212 00:37:15.199967  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:15.199975  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:15.199983  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:15.199991  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:15.200167  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:15.693002  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:15.693027  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:15.693035  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:15.693041  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:15.695104  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:15.695127  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:15.695138  104530 round_trippers.go:580]     Audit-Id: e8dafcef-e232-4564-93ec-c99146d453a6
	I1212 00:37:15.695144  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:15.695152  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:15.695161  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:15.695170  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:15.695180  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:15 GMT
	I1212 00:37:15.695539  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I1212 00:37:15.695954  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:15.695966  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:15.695974  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:15.695979  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:15.697613  104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:37:15.697631  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:15.697640  104530 round_trippers.go:580]     Audit-Id: cd894f72-99d1-44a1-ba36-abb33011003a
	I1212 00:37:15.697649  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:15.697656  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:15.697661  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:15.697666  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:15.697671  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:15 GMT
	I1212 00:37:15.697922  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:16.193670  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:16.193698  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:16.193707  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:16.193712  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:16.196864  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:16.196891  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:16.196899  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:16.196904  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:16.196909  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:16 GMT
	I1212 00:37:16.196920  104530 round_trippers.go:580]     Audit-Id: 7e651bce-3845-4b66-8fb2-622327e8d40b
	I1212 00:37:16.196928  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:16.196936  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:16.197330  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I1212 00:37:16.197766  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:16.197783  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:16.197790  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:16.197796  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:16.200198  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:16.200219  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:16.200225  104530 round_trippers.go:580]     Audit-Id: 9972e939-1cb4-4a78-8c0d-11a91b0625a8
	I1212 00:37:16.200230  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:16.200235  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:16.200241  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:16.200249  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:16.200254  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:16 GMT
	I1212 00:37:16.200367  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:16.200638  104530 pod_ready.go:102] pod "kube-controller-manager-multinode-859606" in "kube-system" namespace has status "Ready":"False"
	I1212 00:37:16.693040  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:16.693064  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:16.693073  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:16.693090  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:16.696324  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:16.696344  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:16.696354  104530 round_trippers.go:580]     Audit-Id: cfb4110b-a12c-4dd5-bb27-d5b38a9bdf99
	I1212 00:37:16.696363  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:16.696371  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:16.696380  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:16.696388  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:16.696393  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:16 GMT
	I1212 00:37:16.696757  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I1212 00:37:16.697175  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:16.697186  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:16.697193  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:16.697199  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:16.699444  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:16.699466  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:16.699482  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:16.699489  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:16.699508  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:16.699514  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:16.699519  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:16 GMT
	I1212 00:37:16.699524  104530 round_trippers.go:580]     Audit-Id: 86f1d394-268f-4773-8a4f-65dfa15966b3
	I1212 00:37:16.699786  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:17.193535  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:17.193562  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:17.193571  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:17.193577  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:17.197001  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:17.197029  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:17.197039  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:17.197048  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:17.197056  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:17.197063  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:17 GMT
	I1212 00:37:17.197078  104530 round_trippers.go:580]     Audit-Id: 0039bd07-2809-441c-8a08-a005a1fb9474
	I1212 00:37:17.197086  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:17.197590  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I1212 00:37:17.198195  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:17.198215  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:17.198227  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:17.198235  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:17.200561  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:17.200580  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:17.200594  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:17.200602  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:17.200608  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:17 GMT
	I1212 00:37:17.200615  104530 round_trippers.go:580]     Audit-Id: 7ca59026-3641-45f9-af2d-e56b2f15bbf4
	I1212 00:37:17.200623  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:17.200631  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:17.200818  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:17.693526  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:17.693559  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:17.693573  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:17.693581  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:17.696472  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:17.696503  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:17.696515  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:17 GMT
	I1212 00:37:17.696522  104530 round_trippers.go:580]     Audit-Id: f3b1cbfa-67ea-48ba-a602-3e51e26733e7
	I1212 00:37:17.696529  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:17.696537  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:17.696546  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:17.696556  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:17.696733  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I1212 00:37:17.697203  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:17.697219  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:17.697230  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:17.697237  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:17.699246  104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:37:17.699267  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:17.699274  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:17.699279  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:17.699284  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:17.699289  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:17.699303  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:17 GMT
	I1212 00:37:17.699311  104530 round_trippers.go:580]     Audit-Id: 537e896a-ad01-467d-8765-b18cc048639c
	I1212 00:37:17.699750  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:18.193513  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:18.193539  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:18.193547  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:18.193553  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:18.196642  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:18.196663  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:18.196670  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:18.196675  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:18.196680  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:18.196685  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:18.196690  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:18 GMT
	I1212 00:37:18.196695  104530 round_trippers.go:580]     Audit-Id: 01c5b2b7-3578-4302-9a5b-dbb75c34b269
	I1212 00:37:18.197211  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I1212 00:37:18.197615  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:18.197626  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:18.197637  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:18.197645  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:18.199967  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:18.199986  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:18.199995  104530 round_trippers.go:580]     Audit-Id: b956bf4f-9b6c-4de6-87c0-84916a54c9aa
	I1212 00:37:18.200004  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:18.200012  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:18.200019  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:18.200027  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:18.200035  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:18 GMT
	I1212 00:37:18.200333  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:18.692979  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:18.693006  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:18.693014  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:18.693021  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:18.696863  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:18.696888  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:18.696895  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:18.696901  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:18 GMT
	I1212 00:37:18.696906  104530 round_trippers.go:580]     Audit-Id: d5c6e54d-aaea-4bf3-8a70-4dc0b57b264e
	I1212 00:37:18.696911  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:18.696916  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:18.696921  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:18.697946  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I1212 00:37:18.698353  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:18.698366  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:18.698373  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:18.698381  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:18.700609  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:18.700629  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:18.700639  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:18.700647  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:18 GMT
	I1212 00:37:18.700655  104530 round_trippers.go:580]     Audit-Id: 0dde864e-ad38-4768-932a-24947963eeef
	I1212 00:37:18.700662  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:18.700669  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:18.700677  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:18.700840  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:18.701109  104530 pod_ready.go:102] pod "kube-controller-manager-multinode-859606" in "kube-system" namespace has status "Ready":"False"
	I1212 00:37:19.193617  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:19.193643  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:19.193652  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:19.193658  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:19.197048  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:19.197071  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:19.197078  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:19 GMT
	I1212 00:37:19.197083  104530 round_trippers.go:580]     Audit-Id: 20502bbb-60e6-48d0-b283-2696575d955f
	I1212 00:37:19.197090  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:19.197095  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:19.197100  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:19.197106  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:19.197298  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1240","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
	I1212 00:37:19.197741  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:19.197753  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:19.197760  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:19.197766  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:19.199854  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:19.199879  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:19.199889  104530 round_trippers.go:580]     Audit-Id: d3c788eb-c748-41e7-8b78-70c1417d3584
	I1212 00:37:19.199898  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:19.199907  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:19.199932  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:19.199946  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:19.199954  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:19 GMT
	I1212 00:37:19.200107  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:19.200426  104530 pod_ready.go:92] pod "kube-controller-manager-multinode-859606" in "kube-system" namespace has status "Ready":"True"
	I1212 00:37:19.200447  104530 pod_ready.go:81] duration metric: took 5.287942632s waiting for pod "kube-controller-manager-multinode-859606" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:19.200463  104530 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6f6zz" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:19.200518  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6f6zz
	I1212 00:37:19.200527  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:19.200538  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:19.200547  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:19.203112  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:19.203134  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:19.203143  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:19.203151  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:19.203159  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:19.203168  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:19.203177  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:19 GMT
	I1212 00:37:19.203185  104530 round_trippers.go:580]     Audit-Id: d4bddcbb-39f6-4c08-83da-2d4523904cda
	I1212 00:37:19.203320  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6f6zz","generateName":"kube-proxy-","namespace":"kube-system","uid":"d5931621-47fd-4f1a-bf46-813dd8352f00","resourceVersion":"1087","creationTimestamp":"2023-12-12T00:32:02Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"92d6756f-ef04-4d8e-970a-e73854372aee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:32:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d6756f-ef04-4d8e-970a-e73854372aee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I1212 00:37:19.203874  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m03
	I1212 00:37:19.203896  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:19.203907  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:19.203928  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:19.206014  104530 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1212 00:37:19.206033  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:19.206049  104530 round_trippers.go:580]     Content-Length: 210
	I1212 00:37:19.206061  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:19 GMT
	I1212 00:37:19.206068  104530 round_trippers.go:580]     Audit-Id: 4aef6f8a-43a6-4188-a386-e5e2d3a1f6f3
	I1212 00:37:19.206082  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:19.206089  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:19.206097  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:19.206105  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:19.206236  104530 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-859606-m03\" not found","reason":"NotFound","details":{"name":"multinode-859606-m03","kind":"nodes"},"code":404}
	I1212 00:37:19.206386  104530 pod_ready.go:97] node "multinode-859606-m03" hosting pod "kube-proxy-6f6zz" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-859606-m03": nodes "multinode-859606-m03" not found
	I1212 00:37:19.206408  104530 pod_ready.go:81] duration metric: took 5.937337ms waiting for pod "kube-proxy-6f6zz" in "kube-system" namespace to be "Ready" ...
	E1212 00:37:19.206423  104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606-m03" hosting pod "kube-proxy-6f6zz" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-859606-m03": nodes "multinode-859606-m03" not found
	I1212 00:37:19.206431  104530 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-prf7f" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:19.206494  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-prf7f
	I1212 00:37:19.206504  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:19.206515  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:19.206527  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:19.208365  104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:37:19.208385  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:19.208394  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:19.208403  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:19.208418  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:19.208426  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:19.208437  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:19 GMT
	I1212 00:37:19.208447  104530 round_trippers.go:580]     Audit-Id: c0033a2c-2985-4a9c-95d1-b824f5e20713
	I1212 00:37:19.208684  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-prf7f","generateName":"kube-proxy-","namespace":"kube-system","uid":"8238226c-3d01-4b91-963b-7360206b8615","resourceVersion":"1206","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"92d6756f-ef04-4d8e-970a-e73854372aee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d6756f-ef04-4d8e-970a-e73854372aee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I1212 00:37:19.209132  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:19.209150  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:19.209164  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:19.209177  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:19.210970  104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:37:19.210988  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:19.210997  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:19.211006  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:19 GMT
	I1212 00:37:19.211020  104530 round_trippers.go:580]     Audit-Id: 396956f0-54b8-4778-ab7c-a37fe9b33b2e
	I1212 00:37:19.211027  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:19.211041  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:19.211052  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:19.211256  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:19.211606  104530 pod_ready.go:92] pod "kube-proxy-prf7f" in "kube-system" namespace has status "Ready":"True"
	I1212 00:37:19.211630  104530 pod_ready.go:81] duration metric: took 5.187099ms waiting for pod "kube-proxy-prf7f" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:19.211641  104530 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q9h26" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:19.288985  104530 request.go:629] Waited for 77.268211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q9h26
	I1212 00:37:19.289047  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q9h26
	I1212 00:37:19.289060  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:19.289074  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:19.289085  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:19.291884  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:19.291923  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:19.291934  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:19.291943  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:19.291954  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:19.291962  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:19.291969  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:19 GMT
	I1212 00:37:19.291984  104530 round_trippers.go:580]     Audit-Id: f9222a80-11b7-4070-b9c2-ea9633cc9696
	I1212 00:37:19.292162  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q9h26","generateName":"kube-proxy-","namespace":"kube-system","uid":"7dd12033-bf81-4cd3-a412-3fe3211dc87b","resourceVersion":"978","creationTimestamp":"2023-12-12T00:31:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"92d6756f-ef04-4d8e-970a-e73854372aee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:31:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d6756f-ef04-4d8e-970a-e73854372aee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I1212 00:37:19.489027  104530 request.go:629] Waited for 196.400938ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m02
	I1212 00:37:19.489092  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m02
	I1212 00:37:19.489097  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:19.489104  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:19.489111  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:19.492013  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:19.492033  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:19.492040  104530 round_trippers.go:580]     Audit-Id: 78f39b63-2309-4f9b-bec7-2fb901d235db
	I1212 00:37:19.492045  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:19.492051  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:19.492060  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:19.492069  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:19.492078  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:19 GMT
	I1212 00:37:19.492270  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606-m02","uid":"4dead465-c032-4274-8147-a5a7d38c1bf5","resourceVersion":"1083","creationTimestamp":"2023-12-12T00:34:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_35_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:34:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3805 chars]
	I1212 00:37:19.492641  104530 pod_ready.go:92] pod "kube-proxy-q9h26" in "kube-system" namespace has status "Ready":"True"
	I1212 00:37:19.492662  104530 pod_ready.go:81] duration metric: took 281.010934ms waiting for pod "kube-proxy-q9h26" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:19.492672  104530 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-859606" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:19.688873  104530 request.go:629] Waited for 196.137127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-859606
	I1212 00:37:19.688950  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-859606
	I1212 00:37:19.688955  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:19.688963  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:19.688969  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:19.691734  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:19.691755  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:19.691762  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:19 GMT
	I1212 00:37:19.691767  104530 round_trippers.go:580]     Audit-Id: f7675bf4-e31a-4738-b42f-be7859177fe3
	I1212 00:37:19.691772  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:19.691777  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:19.691783  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:19.691788  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:19.692171  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-859606","namespace":"kube-system","uid":"19a4264c-6ba5-44f4-8419-6f04d6224c92","resourceVersion":"1215","creationTimestamp":"2023-12-12T00:30:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bf1fb8b18f1a6f1d2413ac0c0fd0e39c","kubernetes.io/config.mirror":"bf1fb8b18f1a6f1d2413ac0c0fd0e39c","kubernetes.io/config.seen":"2023-12-12T00:29:55.207819594Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I1212 00:37:19.888908  104530 request.go:629] Waited for 196.296036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:19.888977  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:19.888982  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:19.888989  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:19.888996  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:19.891677  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:19.891697  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:19.891704  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:19 GMT
	I1212 00:37:19.891710  104530 round_trippers.go:580]     Audit-Id: 05fc06a3-8feb-45d4-9823-a6b2852345e9
	I1212 00:37:19.891723  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:19.891735  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:19.891745  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:19.891754  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:19.892212  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:19.892531  104530 pod_ready.go:92] pod "kube-scheduler-multinode-859606" in "kube-system" namespace has status "Ready":"True"
	I1212 00:37:19.892549  104530 pod_ready.go:81] duration metric: took 399.870057ms waiting for pod "kube-scheduler-multinode-859606" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:19.892566  104530 pod_ready.go:38] duration metric: took 10.095238343s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:37:19.892585  104530 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:37:19.892637  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:19.905440  104530 command_runner.go:130] > 1800
	I1212 00:37:19.905932  104530 api_server.go:72] duration metric: took 11.991353984s to wait for apiserver process to appear ...
	I1212 00:37:19.905947  104530 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:37:19.905967  104530 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
	I1212 00:37:19.912545  104530 api_server.go:279] https://192.168.39.40:8443/healthz returned 200:
	ok
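	For reference, the healthz probe logged above is just an HTTPS GET that must return 200 with body "ok". A minimal Go sketch of that check, assuming an *http.Client already set up with the cluster's CA and client certificates (only the endpoint and the expected body come from the log; the function and parameter names are illustrative):
	
		package healthsketch
	
		import (
			"fmt"
			"io"
			"net/http"
		)
	
		// checkHealthz mirrors the probe against https://192.168.39.40:8443/healthz:
		// a 200 response whose body is exactly "ok" counts as healthy.
		func checkHealthz(client *http.Client, apiServerURL string) error {
			resp, err := client.Get(apiServerURL + "/healthz")
			if err != nil {
				return err
			}
			defer resp.Body.Close()
			body, err := io.ReadAll(resp.Body)
			if err != nil {
				return err
			}
			if resp.StatusCode != http.StatusOK || string(body) != "ok" {
				return fmt.Errorf("apiserver unhealthy: %d %q", resp.StatusCode, body)
			}
			return nil
		}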
	I1212 00:37:19.912608  104530 round_trippers.go:463] GET https://192.168.39.40:8443/version
	I1212 00:37:19.912620  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:19.912630  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:19.912637  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:19.913604  104530 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1212 00:37:19.913622  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:19.913631  104530 round_trippers.go:580]     Audit-Id: a90e5deb-2922-43fe-bcfb-bbd1e68986eb
	I1212 00:37:19.913640  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:19.913655  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:19.913663  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:19.913674  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:19.913683  104530 round_trippers.go:580]     Content-Length: 264
	I1212 00:37:19.913691  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:19 GMT
	I1212 00:37:19.913714  104530 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 00:37:19.913766  104530 api_server.go:141] control plane version: v1.28.4
	I1212 00:37:19.913784  104530 api_server.go:131] duration metric: took 7.830198ms to wait for apiserver health ...
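	The /version body printed above maps cleanly onto a small struct; a hedged sketch of decoding it (field names mirror the JSON shown in the log, the HTTP client setup is assumed):
	
		package versionsketch
	
		import (
			"encoding/json"
			"net/http"
		)
	
		// versionInfo mirrors the fields of the /version response shown above.
		type versionInfo struct {
			Major      string `json:"major"`
			Minor      string `json:"minor"`
			GitVersion string `json:"gitVersion"`
			Platform   string `json:"platform"`
		}
	
		// controlPlaneVersion fetches /version and returns gitVersion, e.g. "v1.28.4".
		func controlPlaneVersion(client *http.Client, apiServerURL string) (string, error) {
			resp, err := client.Get(apiServerURL + "/version")
			if err != nil {
				return "", err
			}
			defer resp.Body.Close()
			var v versionInfo
			if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
				return "", err
			}
			return v.GitVersion, nil
		}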
	I1212 00:37:19.913794  104530 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:37:20.089251  104530 request.go:629] Waited for 175.374729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
	I1212 00:37:20.089344  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
	I1212 00:37:20.089351  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:20.089363  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:20.089370  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:20.093974  104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:37:20.094001  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:20.094009  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:20.094016  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:20 GMT
	I1212 00:37:20.094024  104530 round_trippers.go:580]     Audit-Id: a00499e6-5aa6-4108-b030-bb102abafbdd
	I1212 00:37:20.094032  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:20.094055  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:20.094065  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:20.095252  104530 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1240"},"items":[{"metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1231","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83341 chars]
	I1212 00:37:20.098784  104530 system_pods.go:59] 12 kube-system pods found
	I1212 00:37:20.098809  104530 system_pods.go:61] "coredns-5dd5756b68-t9jz8" [3605a003-e8d6-46b2-8fe7-f45647656622] Running
	I1212 00:37:20.098814  104530 system_pods.go:61] "etcd-multinode-859606" [7d6ae370-b910-4aef-8729-e141b307ae17] Running
	I1212 00:37:20.098820  104530 system_pods.go:61] "kindnet-9slwc" [6b37daf7-e9d5-47c5-ae94-01150282b6cf] Running
	I1212 00:37:20.098826  104530 system_pods.go:61] "kindnet-d4q52" [35ed1c56-7487-4b6d-ab1f-b5cfe6502739] Running
	I1212 00:37:20.098832  104530 system_pods.go:61] "kindnet-x2g5d" [c1dab004-2557-4b4f-975b-bd0b5a8f4d90] Running
	I1212 00:37:20.098839  104530 system_pods.go:61] "kube-apiserver-multinode-859606" [0060efa7-dc06-439e-878f-b93b0e016326] Running
	I1212 00:37:20.098853  104530 system_pods.go:61] "kube-controller-manager-multinode-859606" [901bf3ab-f34d-42c8-b1da-d5431ae0219f] Running
	I1212 00:37:20.098864  104530 system_pods.go:61] "kube-proxy-6f6zz" [d5931621-47fd-4f1a-bf46-813dd8352f00] Running
	I1212 00:37:20.098870  104530 system_pods.go:61] "kube-proxy-prf7f" [8238226c-3d01-4b91-963b-7360206b8615] Running
	I1212 00:37:20.098877  104530 system_pods.go:61] "kube-proxy-q9h26" [7dd12033-bf81-4cd3-a412-3fe3211dc87b] Running
	I1212 00:37:20.098887  104530 system_pods.go:61] "kube-scheduler-multinode-859606" [19a4264c-6ba5-44f4-8419-6f04d6224c92] Running
	I1212 00:37:20.098896  104530 system_pods.go:61] "storage-provisioner" [a021db21-b335-4c05-8e32-808642dbb72e] Running
	I1212 00:37:20.098906  104530 system_pods.go:74] duration metric: took 185.102197ms to wait for pod list to return data ...
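	The "waiting for kube-system pods" step reduces to listing pods in the kube-system namespace and checking the phase each one reports; an illustrative sketch in the same raw-REST style as the requests above (only the fields needed here are decoded; TLS and auth setup are assumed):
	
		package podsketch
	
		import (
			"encoding/json"
			"net/http"
		)
	
		// podList decodes just enough of the PodList response to reproduce the
		// "12 kube-system pods found ... Running" summary above.
		type podList struct {
			Items []struct {
				Metadata struct {
					Name string `json:"name"`
				} `json:"metadata"`
				Status struct {
					Phase string `json:"phase"`
				} `json:"status"`
			} `json:"items"`
		}
	
		// kubeSystemPodsRunning reports whether every kube-system pod is in phase Running.
		func kubeSystemPodsRunning(client *http.Client, apiServerURL string) (bool, error) {
			resp, err := client.Get(apiServerURL + "/api/v1/namespaces/kube-system/pods")
			if err != nil {
				return false, err
			}
			defer resp.Body.Close()
			var pods podList
			if err := json.NewDecoder(resp.Body).Decode(&pods); err != nil {
				return false, err
			}
			for _, p := range pods.Items {
				if p.Status.Phase != "Running" {
					return false, nil
				}
			}
			return true, nil
		}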
	I1212 00:37:20.098917  104530 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:37:20.289369  104530 request.go:629] Waited for 190.371344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/default/serviceaccounts
	I1212 00:37:20.289426  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/default/serviceaccounts
	I1212 00:37:20.289431  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:20.289439  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:20.289445  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:20.292334  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:20.292356  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:20.292380  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:20.292392  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:20.292406  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:20.292429  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:20.292440  104530 round_trippers.go:580]     Content-Length: 262
	I1212 00:37:20.292445  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:20 GMT
	I1212 00:37:20.292452  104530 round_trippers.go:580]     Audit-Id: fcc27580-a669-4f4d-a44c-e2fc099e94e8
	I1212 00:37:20.292478  104530 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1240"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b7226be9-2d9e-41aa-a29f-25b2631acf72","resourceVersion":"337","creationTimestamp":"2023-12-12T00:30:16Z"}}]}
	I1212 00:37:20.292693  104530 default_sa.go:45] found service account: "default"
	I1212 00:37:20.292714  104530 default_sa.go:55] duration metric: took 193.787623ms for default service account to be created ...
	I1212 00:37:20.292723  104530 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:37:20.489190  104530 request.go:629] Waited for 196.390334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
	I1212 00:37:20.489259  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
	I1212 00:37:20.489264  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:20.489281  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:20.489299  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:20.493457  104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:37:20.493482  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:20.493501  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:20.493511  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:20.493519  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:20.493534  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:20.493541  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:20 GMT
	I1212 00:37:20.493545  104530 round_trippers.go:580]     Audit-Id: b5e27102-8247-4af2-81d0-d5c782e978b9
	I1212 00:37:20.495018  104530 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1240"},"items":[{"metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1231","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83341 chars]
	I1212 00:37:20.497464  104530 system_pods.go:86] 12 kube-system pods found
	I1212 00:37:20.497487  104530 system_pods.go:89] "coredns-5dd5756b68-t9jz8" [3605a003-e8d6-46b2-8fe7-f45647656622] Running
	I1212 00:37:20.497492  104530 system_pods.go:89] "etcd-multinode-859606" [7d6ae370-b910-4aef-8729-e141b307ae17] Running
	I1212 00:37:20.497498  104530 system_pods.go:89] "kindnet-9slwc" [6b37daf7-e9d5-47c5-ae94-01150282b6cf] Running
	I1212 00:37:20.497505  104530 system_pods.go:89] "kindnet-d4q52" [35ed1c56-7487-4b6d-ab1f-b5cfe6502739] Running
	I1212 00:37:20.497520  104530 system_pods.go:89] "kindnet-x2g5d" [c1dab004-2557-4b4f-975b-bd0b5a8f4d90] Running
	I1212 00:37:20.497528  104530 system_pods.go:89] "kube-apiserver-multinode-859606" [0060efa7-dc06-439e-878f-b93b0e016326] Running
	I1212 00:37:20.497543  104530 system_pods.go:89] "kube-controller-manager-multinode-859606" [901bf3ab-f34d-42c8-b1da-d5431ae0219f] Running
	I1212 00:37:20.497550  104530 system_pods.go:89] "kube-proxy-6f6zz" [d5931621-47fd-4f1a-bf46-813dd8352f00] Running
	I1212 00:37:20.497554  104530 system_pods.go:89] "kube-proxy-prf7f" [8238226c-3d01-4b91-963b-7360206b8615] Running
	I1212 00:37:20.497560  104530 system_pods.go:89] "kube-proxy-q9h26" [7dd12033-bf81-4cd3-a412-3fe3211dc87b] Running
	I1212 00:37:20.497565  104530 system_pods.go:89] "kube-scheduler-multinode-859606" [19a4264c-6ba5-44f4-8419-6f04d6224c92] Running
	I1212 00:37:20.497571  104530 system_pods.go:89] "storage-provisioner" [a021db21-b335-4c05-8e32-808642dbb72e] Running
	I1212 00:37:20.497579  104530 system_pods.go:126] duration metric: took 204.845476ms to wait for k8s-apps to be running ...
	I1212 00:37:20.497589  104530 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:37:20.497645  104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:37:20.514001  104530 system_svc.go:56] duration metric: took 16.405003ms WaitForService to wait for kubelet.
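	The kubelet check is an exit-status test of systemctl run on the node over SSH; a simplified local sketch of the same idea (the actual command in the log also passes an extra `service` argument and goes through the ssh_runner, which is omitted here):
	
		package svcsketch
	
		import "os/exec"
	
		// kubeletActive mirrors the `sudo systemctl is-active --quiet` probe above:
		// a zero exit status means the unit is active, any other result does not.
		func kubeletActive() bool {
			return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
		}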
	I1212 00:37:20.514018  104530 kubeadm.go:581] duration metric: took 12.599444535s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 00:37:20.514036  104530 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:37:20.689493  104530 request.go:629] Waited for 175.357994ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes
	I1212 00:37:20.689560  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes
	I1212 00:37:20.689567  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:20.689580  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:20.689590  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:20.692705  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:20.692723  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:20.692730  104530 round_trippers.go:580]     Audit-Id: 1464068b-baf2-48bc-ba66-087651c82097
	I1212 00:37:20.692735  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:20.692740  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:20.692752  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:20.692766  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:20.692774  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:20 GMT
	I1212 00:37:20.693088  104530 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1240"},"items":[{"metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10008 chars]
	I1212 00:37:20.693685  104530 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 00:37:20.693709  104530 node_conditions.go:123] node cpu capacity is 2
	I1212 00:37:20.693723  104530 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 00:37:20.693735  104530 node_conditions.go:123] node cpu capacity is 2
	I1212 00:37:20.693741  104530 node_conditions.go:105] duration metric: took 179.70085ms to run NodePressure ...
	I1212 00:37:20.693757  104530 start.go:228] waiting for startup goroutines ...
	I1212 00:37:20.693768  104530 start.go:233] waiting for cluster config update ...
	I1212 00:37:20.693780  104530 start.go:242] writing updated cluster config ...
	I1212 00:37:20.694346  104530 config.go:182] Loaded profile config "multinode-859606": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 00:37:20.694464  104530 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/config.json ...
	I1212 00:37:20.697216  104530 out.go:177] * Starting worker node multinode-859606-m02 in cluster multinode-859606
	I1212 00:37:20.698351  104530 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 00:37:20.698370  104530 cache.go:56] Caching tarball of preloaded images
	I1212 00:37:20.698473  104530 preload.go:174] Found /home/jenkins/minikube-integration/17764-80294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 00:37:20.698483  104530 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 00:37:20.698567  104530 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/config.json ...
	I1212 00:37:20.698742  104530 start.go:365] acquiring machines lock for multinode-859606-m02: {Name:mk381e91746c2e5b8a4620fe3fd447d80375e413 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 00:37:20.698785  104530 start.go:369] acquired machines lock for "multinode-859606-m02" in 25.605µs
	I1212 00:37:20.698798  104530 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:37:20.698805  104530 fix.go:54] fixHost starting: m02
	I1212 00:37:20.699049  104530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1212 00:37:20.699070  104530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:37:20.713769  104530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39383
	I1212 00:37:20.714173  104530 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:37:20.714616  104530 main.go:141] libmachine: Using API Version  1
	I1212 00:37:20.714644  104530 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:37:20.714957  104530 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:37:20.715148  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
	I1212 00:37:20.715321  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetState
	I1212 00:37:20.716762  104530 fix.go:102] recreateIfNeeded on multinode-859606-m02: state=Stopped err=<nil>
	I1212 00:37:20.716788  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
	W1212 00:37:20.716969  104530 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 00:37:20.718972  104530 out.go:177] * Restarting existing kvm2 VM for "multinode-859606-m02" ...
	I1212 00:37:20.720351  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .Start
	I1212 00:37:20.720531  104530 main.go:141] libmachine: (multinode-859606-m02) Ensuring networks are active...
	I1212 00:37:20.721224  104530 main.go:141] libmachine: (multinode-859606-m02) Ensuring network default is active
	I1212 00:37:20.721668  104530 main.go:141] libmachine: (multinode-859606-m02) Ensuring network mk-multinode-859606 is active
	I1212 00:37:20.722168  104530 main.go:141] libmachine: (multinode-859606-m02) Getting domain xml...
	I1212 00:37:20.722963  104530 main.go:141] libmachine: (multinode-859606-m02) Creating domain...
	I1212 00:37:21.957474  104530 main.go:141] libmachine: (multinode-859606-m02) Waiting to get IP...
	I1212 00:37:21.958335  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:21.958740  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
	I1212 00:37:21.958796  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:21.958699  104802 retry.go:31] will retry after 282.895442ms: waiting for machine to come up
	I1212 00:37:22.243280  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:22.243745  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
	I1212 00:37:22.243773  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:22.243699  104802 retry.go:31] will retry after 387.587998ms: waiting for machine to come up
	I1212 00:37:22.633350  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:22.633841  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
	I1212 00:37:22.633875  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:22.633770  104802 retry.go:31] will retry after 299.810803ms: waiting for machine to come up
	I1212 00:37:22.935179  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:22.935627  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
	I1212 00:37:22.935662  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:22.935567  104802 retry.go:31] will retry after 368.460834ms: waiting for machine to come up
	I1212 00:37:23.306050  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:23.306531  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
	I1212 00:37:23.306554  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:23.306486  104802 retry.go:31] will retry after 567.761569ms: waiting for machine to come up
	I1212 00:37:23.876187  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:23.876658  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
	I1212 00:37:23.876692  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:23.876603  104802 retry.go:31] will retry after 673.685642ms: waiting for machine to come up
	I1212 00:37:24.551471  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:24.551879  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
	I1212 00:37:24.551932  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:24.551825  104802 retry.go:31] will retry after 837.913991ms: waiting for machine to come up
	I1212 00:37:25.391781  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:25.392075  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
	I1212 00:37:25.392106  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:25.392038  104802 retry.go:31] will retry after 1.006695939s: waiting for machine to come up
	I1212 00:37:26.400658  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:26.401136  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
	I1212 00:37:26.401168  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:26.401063  104802 retry.go:31] will retry after 1.662996951s: waiting for machine to come up
	I1212 00:37:28.065937  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:28.066407  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
	I1212 00:37:28.066429  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:28.066363  104802 retry.go:31] will retry after 2.272536479s: waiting for machine to come up
	I1212 00:37:30.341875  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:30.342336  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
	I1212 00:37:30.342380  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:30.342274  104802 retry.go:31] will retry after 1.895134507s: waiting for machine to come up
	I1212 00:37:32.239315  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:32.239701  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
	I1212 00:37:32.239736  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:32.239637  104802 retry.go:31] will retry after 2.566822425s: waiting for machine to come up
	I1212 00:37:34.808939  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:34.809382  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
	I1212 00:37:34.809406  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:34.809339  104802 retry.go:31] will retry after 4.439419543s: waiting for machine to come up
	I1212 00:37:39.249907  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.250290  104530 main.go:141] libmachine: (multinode-859606-m02) Found IP for machine: 192.168.39.65
	I1212 00:37:39.250320  104530 main.go:141] libmachine: (multinode-859606-m02) Reserving static IP address...
	I1212 00:37:39.250342  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has current primary IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.250818  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "multinode-859606-m02", mac: "52:54:00:ea:e9:13", ip: "192.168.39.65"} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:39.250858  104530 main.go:141] libmachine: (multinode-859606-m02) Reserved static IP address: 192.168.39.65
	I1212 00:37:39.250878  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | skip adding static IP to network mk-multinode-859606 - found existing host DHCP lease matching {name: "multinode-859606-m02", mac: "52:54:00:ea:e9:13", ip: "192.168.39.65"}
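	The "Waiting to get IP" phase above is a retry loop with a growing, jittered delay (282ms, 387ms, ... 4.4s before the DHCP lease appears). An illustrative sketch of that pattern, where lookupIP is a hypothetical stand-in for the libvirt DHCP-lease query, not the actual driver API:
	
		package retrysketch
	
		import (
			"errors"
			"math/rand"
			"time"
		)
	
		// waitForIP polls lookupIP until it returns an address, sleeping for an
		// increasing, jittered interval between attempts, as the retry lines above show.
		func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
			deadline := time.Now().Add(timeout)
			delay := 250 * time.Millisecond
			for time.Now().Before(deadline) {
				if ip, err := lookupIP(); err == nil && ip != "" {
					return ip, nil
				}
				jitter := time.Duration(rand.Int63n(int64(delay) / 2))
				time.Sleep(delay + jitter)
				if delay < 5*time.Second {
					delay *= 2
				}
			}
			return "", errors.New("timed out waiting for machine to come up")
		}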
	I1212 00:37:39.250889  104530 main.go:141] libmachine: (multinode-859606-m02) Waiting for SSH to be available...
	I1212 00:37:39.250909  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | Getting to WaitForSSH function...
	I1212 00:37:39.253228  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.253705  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:39.253733  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.253879  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | Using SSH client type: external
	I1212 00:37:39.253906  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa (-rw-------)
	I1212 00:37:39.253933  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 00:37:39.253947  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | About to run SSH command:
	I1212 00:37:39.253968  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | exit 0
	I1212 00:37:39.347723  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | SSH cmd err, output: <nil>: 
	I1212 00:37:39.348137  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetConfigRaw
	I1212 00:37:39.348792  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetIP
	I1212 00:37:39.351240  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.351592  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:39.351628  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.351860  104530 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/config.json ...
	I1212 00:37:39.352092  104530 machine.go:88] provisioning docker machine ...
	I1212 00:37:39.352113  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
	I1212 00:37:39.352303  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetMachineName
	I1212 00:37:39.352445  104530 buildroot.go:166] provisioning hostname "multinode-859606-m02"
	I1212 00:37:39.352470  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetMachineName
	I1212 00:37:39.352609  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
	I1212 00:37:39.354957  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.355309  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:39.355339  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.355537  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
	I1212 00:37:39.355716  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:39.355867  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:39.355992  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
	I1212 00:37:39.356149  104530 main.go:141] libmachine: Using SSH client type: native
	I1212 00:37:39.356637  104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1212 00:37:39.356656  104530 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-859606-m02 && echo "multinode-859606-m02" | sudo tee /etc/hostname
	I1212 00:37:39.502532  104530 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-859606-m02
	
	I1212 00:37:39.502568  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
	I1212 00:37:39.505328  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.505789  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:39.505823  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.505999  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
	I1212 00:37:39.506231  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:39.506373  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:39.506531  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
	I1212 00:37:39.506708  104530 main.go:141] libmachine: Using SSH client type: native
	I1212 00:37:39.507067  104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1212 00:37:39.507085  104530 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-859606-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-859606-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-859606-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:37:39.645009  104530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:37:39.645036  104530 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17764-80294/.minikube CaCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17764-80294/.minikube}
	I1212 00:37:39.645051  104530 buildroot.go:174] setting up certificates
	I1212 00:37:39.645059  104530 provision.go:83] configureAuth start
	I1212 00:37:39.645068  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetMachineName
	I1212 00:37:39.645319  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetIP
	I1212 00:37:39.648244  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.648695  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:39.648726  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.648891  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
	I1212 00:37:39.651280  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.651603  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:39.651634  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.651775  104530 provision.go:138] copyHostCerts
	I1212 00:37:39.651810  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem
	I1212 00:37:39.651849  104530 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem, removing ...
	I1212 00:37:39.651862  104530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem
	I1212 00:37:39.651958  104530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem (1078 bytes)
	I1212 00:37:39.652055  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem
	I1212 00:37:39.652080  104530 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem, removing ...
	I1212 00:37:39.652087  104530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem
	I1212 00:37:39.652126  104530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem (1123 bytes)
	I1212 00:37:39.652240  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem
	I1212 00:37:39.652270  104530 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem, removing ...
	I1212 00:37:39.652278  104530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem
	I1212 00:37:39.652320  104530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem (1679 bytes)
	I1212 00:37:39.652413  104530 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca-key.pem org=jenkins.multinode-859606-m02 san=[192.168.39.65 192.168.39.65 localhost 127.0.0.1 minikube multinode-859606-m02]
	I1212 00:37:39.786080  104530 provision.go:172] copyRemoteCerts
	I1212 00:37:39.786162  104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:37:39.786193  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
	I1212 00:37:39.788840  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.789107  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:39.789147  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.789364  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
	I1212 00:37:39.789559  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:39.789730  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
	I1212 00:37:39.789868  104530 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa Username:docker}
	I1212 00:37:39.884832  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:37:39.884920  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:37:39.908744  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:37:39.908817  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1212 00:37:39.932380  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:37:39.932446  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:37:39.956816  104530 provision.go:86] duration metric: configureAuth took 311.743914ms
	I1212 00:37:39.956853  104530 buildroot.go:189] setting minikube options for container-runtime
	I1212 00:37:39.957091  104530 config.go:182] Loaded profile config "multinode-859606": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 00:37:39.957118  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
	I1212 00:37:39.957389  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
	I1212 00:37:39.960094  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.960494  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:39.960529  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.960669  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
	I1212 00:37:39.960847  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:39.961048  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:39.961181  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
	I1212 00:37:39.961346  104530 main.go:141] libmachine: Using SSH client type: native
	I1212 00:37:39.961722  104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1212 00:37:39.961740  104530 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 00:37:40.093977  104530 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 00:37:40.094012  104530 buildroot.go:70] root file system type: tmpfs
	I1212 00:37:40.094174  104530 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 00:37:40.094208  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
	I1212 00:37:40.097149  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:40.097507  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:40.097534  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:40.097760  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
	I1212 00:37:40.098013  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:40.098210  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:40.098318  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
	I1212 00:37:40.098507  104530 main.go:141] libmachine: Using SSH client type: native
	I1212 00:37:40.098848  104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1212 00:37:40.098916  104530 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.40"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 00:37:40.241326  104530 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.40
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 00:37:40.241355  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
	I1212 00:37:40.243925  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:40.244271  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:40.244296  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:40.244504  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
	I1212 00:37:40.244693  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:40.244875  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:40.245023  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
	I1212 00:37:40.245173  104530 main.go:141] libmachine: Using SSH client type: native
	I1212 00:37:40.245547  104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1212 00:37:40.245565  104530 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 00:37:41.126250  104530 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1212 00:37:41.126280  104530 machine.go:91] provisioned docker machine in 1.774172725s
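	Note: the "diff || { mv ... }" command above only falls through to the install branch when the rendered unit differs from the live one or the live unit is missing, so the "diff: can't stat '/lib/systemd/system/docker.service'" message is expected on this freshly restarted VM, and the "Created symlink" line is the output of the subsequent "systemctl -f enable docker". A minimal sketch of the same swap pattern (paths taken from the log, unit body abbreviated; this is illustrative, not minikube's actual provisioner code):
	  printf '%s\n' '[Unit]' 'Description=Docker Application Container Engine' \
	    | sudo tee /lib/systemd/system/docker.service.new >/dev/null
	  sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
	    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	    sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	  }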
	I1212 00:37:41.126296  104530 start.go:300] post-start starting for "multinode-859606-m02" (driver="kvm2")
	I1212 00:37:41.126310  104530 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:37:41.126329  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
	I1212 00:37:41.126679  104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:37:41.126707  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
	I1212 00:37:41.129504  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:41.129833  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:41.129866  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:41.130073  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
	I1212 00:37:41.130301  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:41.130478  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
	I1212 00:37:41.130687  104530 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa Username:docker}
	I1212 00:37:41.225898  104530 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:37:41.230065  104530 command_runner.go:130] > NAME=Buildroot
	I1212 00:37:41.230089  104530 command_runner.go:130] > VERSION=2021.02.12-1-g0ec83c8-dirty
	I1212 00:37:41.230096  104530 command_runner.go:130] > ID=buildroot
	I1212 00:37:41.230109  104530 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 00:37:41.230117  104530 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 00:37:41.230251  104530 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 00:37:41.230275  104530 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-80294/.minikube/addons for local assets ...
	I1212 00:37:41.230351  104530 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-80294/.minikube/files for local assets ...
	I1212 00:37:41.230452  104530 filesync.go:149] local asset: /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem -> 876092.pem in /etc/ssl/certs
	I1212 00:37:41.230466  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem -> /etc/ssl/certs/876092.pem
	I1212 00:37:41.230586  104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:37:41.239133  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem --> /etc/ssl/certs/876092.pem (1708 bytes)
	I1212 00:37:41.262487  104530 start.go:303] post-start completed in 136.174154ms
	I1212 00:37:41.262513  104530 fix.go:56] fixHost completed within 20.563707335s
	I1212 00:37:41.262539  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
	I1212 00:37:41.265240  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:41.265538  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:41.265572  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:41.265778  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
	I1212 00:37:41.265950  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:41.266126  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:41.266310  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
	I1212 00:37:41.266489  104530 main.go:141] libmachine: Using SSH client type: native
	I1212 00:37:41.266856  104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1212 00:37:41.266871  104530 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 00:37:41.396610  104530 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702341461.344204788
	
	I1212 00:37:41.396638  104530 fix.go:206] guest clock: 1702341461.344204788
	I1212 00:37:41.396649  104530 fix.go:219] Guest: 2023-12-12 00:37:41.344204788 +0000 UTC Remote: 2023-12-12 00:37:41.262521516 +0000 UTC m=+81.745766897 (delta=81.683272ms)
	I1212 00:37:41.396669  104530 fix.go:190] guest clock delta is within tolerance: 81.683272ms
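	Note: the clock check above compares the guest's "date +%s.%N" reading against the host-side timestamp taken for the same moment: 1702341461.344204788 - 1702341461.262521516 = 0.081683272 s, i.e. the logged delta of 81.683272ms, which is within tolerance, so no clock resync is forced before provisioning continues.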
	I1212 00:37:41.396676  104530 start.go:83] releasing machines lock for "multinode-859606-m02", held for 20.697881438s
	I1212 00:37:41.396707  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
	I1212 00:37:41.396998  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetIP
	I1212 00:37:41.399794  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:41.400251  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:41.400284  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:41.402301  104530 out.go:177] * Found network options:
	I1212 00:37:41.403745  104530 out.go:177]   - NO_PROXY=192.168.39.40
	W1212 00:37:41.404991  104530 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 00:37:41.405014  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
	I1212 00:37:41.405584  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
	I1212 00:37:41.405757  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
	I1212 00:37:41.405832  104530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:37:41.405875  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
	W1212 00:37:41.405953  104530 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 00:37:41.406034  104530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:37:41.406061  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
	I1212 00:37:41.408298  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:41.408470  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:41.408704  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:41.408734  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:41.408860  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
	I1212 00:37:41.408890  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:41.408931  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:41.409042  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
	I1212 00:37:41.409170  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:41.409276  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:41.409448  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
	I1212 00:37:41.409487  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
	I1212 00:37:41.409614  104530 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa Username:docker}
	I1212 00:37:41.409611  104530 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa Username:docker}
	I1212 00:37:41.504163  104530 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 00:37:41.504453  104530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:37:41.504528  104530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:37:41.528894  104530 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 00:37:41.528955  104530 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 00:37:41.529013  104530 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
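	Note: the find command above renames any bridge- or podman-flavored CNI configs to *.mk_disabled, presumably so they do not conflict with the CNI configuration minikube applies later; the log shows /etc/cni/net.d/87-podman-bridge.conflist being disabled. A more readable sketch of the same rename step (illustrative only, same effect as the find above):
	  for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
	    case "$f" in *.mk_disabled) ;; *) [ -e "$f" ] && sudo mv "$f" "$f.mk_disabled" ;; esac
	  done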
	I1212 00:37:41.529030  104530 start.go:475] detecting cgroup driver to use...
	I1212 00:37:41.529132  104530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:37:41.549871  104530 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 00:37:41.549952  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 00:37:41.559926  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 00:37:41.569604  104530 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 00:37:41.569669  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 00:37:41.578872  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:37:41.588052  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 00:37:41.597753  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:37:41.607940  104530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:37:41.618063  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 00:37:41.628111  104530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:37:41.637202  104530 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 00:37:41.637321  104530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:37:41.645675  104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:37:41.756330  104530 ssh_runner.go:195] Run: sudo systemctl restart containerd
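	Note: the sed edits above rewrite /etc/containerd/config.toml so containerd uses the pause:3.9 sandbox image, the cgroupfs cgroup driver (SystemdCgroup = false), the runc v2 runtime shim and /etc/cni/net.d as its CNI conf_dir, after which containerd is restarted. A quick, illustrative check of the resulting values (the rest of the file is not shown in the log, and the sed -i replacements presuppose the keys already exist there):
	  sudo grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	  #   sandbox_image = "registry.k8s.io/pause:3.9"
	  #   conf_dir = "/etc/cni/net.d"
	  #   SystemdCgroup = false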
	I1212 00:37:41.774116  104530 start.go:475] detecting cgroup driver to use...
	I1212 00:37:41.774203  104530 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 00:37:41.790254  104530 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 00:37:41.790292  104530 command_runner.go:130] > [Unit]
	I1212 00:37:41.790304  104530 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 00:37:41.790313  104530 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 00:37:41.790321  104530 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 00:37:41.790329  104530 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 00:37:41.790357  104530 command_runner.go:130] > StartLimitBurst=3
	I1212 00:37:41.790372  104530 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 00:37:41.790377  104530 command_runner.go:130] > [Service]
	I1212 00:37:41.790387  104530 command_runner.go:130] > Type=notify
	I1212 00:37:41.790391  104530 command_runner.go:130] > Restart=on-failure
	I1212 00:37:41.790396  104530 command_runner.go:130] > Environment=NO_PROXY=192.168.39.40
	I1212 00:37:41.790406  104530 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 00:37:41.790421  104530 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 00:37:41.790437  104530 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 00:37:41.790453  104530 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 00:37:41.790463  104530 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 00:37:41.790474  104530 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 00:37:41.790485  104530 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 00:37:41.790548  104530 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 00:37:41.790571  104530 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 00:37:41.790578  104530 command_runner.go:130] > ExecStart=
	I1212 00:37:41.790612  104530 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1212 00:37:41.790624  104530 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 00:37:41.790640  104530 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 00:37:41.790650  104530 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 00:37:41.790654  104530 command_runner.go:130] > LimitNOFILE=infinity
	I1212 00:37:41.790662  104530 command_runner.go:130] > LimitNPROC=infinity
	I1212 00:37:41.790671  104530 command_runner.go:130] > LimitCORE=infinity
	I1212 00:37:41.790681  104530 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 00:37:41.790693  104530 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 00:37:41.790703  104530 command_runner.go:130] > TasksMax=infinity
	I1212 00:37:41.790718  104530 command_runner.go:130] > TimeoutStartSec=0
	I1212 00:37:41.790729  104530 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 00:37:41.790740  104530 command_runner.go:130] > Delegate=yes
	I1212 00:37:41.790749  104530 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 00:37:41.790764  104530 command_runner.go:130] > KillMode=process
	I1212 00:37:41.790774  104530 command_runner.go:130] > [Install]
	I1212 00:37:41.790781  104530 command_runner.go:130] > WantedBy=multi-user.target
	I1212 00:37:41.790852  104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:37:41.807010  104530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:37:41.831315  104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:37:41.843702  104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 00:37:41.855452  104530 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 00:37:41.887392  104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 00:37:41.900115  104530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:37:41.917122  104530 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
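	Note: /etc/crictl.yaml is rewritten a second time here, switching crictl's runtime endpoint from the containerd socket (set at 00:37:41.549) to cri-dockerd, since this profile runs Kubernetes on Docker through the cri-dockerd shim. An illustrative way to confirm which runtime crictl now talks to (not part of the test):
	  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info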
	I1212 00:37:41.917212  104530 ssh_runner.go:195] Run: which cri-dockerd
	I1212 00:37:41.920948  104530 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 00:37:41.921049  104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 00:37:41.929638  104530 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 00:37:41.945850  104530 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 00:37:42.053680  104530 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 00:37:42.164852  104530 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 00:37:42.164906  104530 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 00:37:42.181956  104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:37:42.292269  104530 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 00:37:43.762922  104530 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.47061306s)
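	Note: docker.go:560 above configures Docker itself for the cgroupfs cgroup driver by writing a 130-byte /etc/docker/daemon.json and restarting the daemon. The exact contents are not printed in the log; a typical daemon.json that selects cgroupfs (an assumption, not the verbatim 130 bytes) could be written like this:
	  printf '{\n  "exec-opts": ["native.cgroupdriver=cgroupfs"]\n}\n' | sudo tee /etc/docker/daemon.json >/dev/null
	  sudo systemctl daemon-reload && sudo systemctl restart docker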
	I1212 00:37:43.762999  104530 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 00:37:43.866143  104530 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 00:37:43.974469  104530 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 00:37:44.089805  104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:37:44.189760  104530 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 00:37:44.203372  104530 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
	I1212 00:37:44.203469  104530 ssh_runner.go:195] Run: sudo journalctl --no-pager -u cri-docker.socket
	I1212 00:37:44.213697  104530 command_runner.go:130] > -- Journal begins at Tue 2023-12-12 00:37:32 UTC, ends at Tue 2023-12-12 00:37:44 UTC. --
	I1212 00:37:44.213720  104530 command_runner.go:130] > Dec 12 00:37:33 minikube systemd[1]: Starting CRI Docker Socket for the API.
	I1212 00:37:44.213727  104530 command_runner.go:130] > Dec 12 00:37:33 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	I1212 00:37:44.213734  104530 command_runner.go:130] > Dec 12 00:37:36 minikube systemd[1]: cri-docker.socket: Succeeded.
	I1212 00:37:44.213740  104530 command_runner.go:130] > Dec 12 00:37:36 minikube systemd[1]: Closed CRI Docker Socket for the API.
	I1212 00:37:44.213747  104530 command_runner.go:130] > Dec 12 00:37:36 minikube systemd[1]: Stopping CRI Docker Socket for the API.
	I1212 00:37:44.213755  104530 command_runner.go:130] > Dec 12 00:37:36 minikube systemd[1]: Starting CRI Docker Socket for the API.
	I1212 00:37:44.213761  104530 command_runner.go:130] > Dec 12 00:37:36 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	I1212 00:37:44.213770  104530 command_runner.go:130] > Dec 12 00:37:38 minikube systemd[1]: cri-docker.socket: Succeeded.
	I1212 00:37:44.213778  104530 command_runner.go:130] > Dec 12 00:37:38 minikube systemd[1]: Closed CRI Docker Socket for the API.
	I1212 00:37:44.213786  104530 command_runner.go:130] > Dec 12 00:37:38 minikube systemd[1]: Stopping CRI Docker Socket for the API.
	I1212 00:37:44.213794  104530 command_runner.go:130] > Dec 12 00:37:38 minikube systemd[1]: Starting CRI Docker Socket for the API.
	I1212 00:37:44.213801  104530 command_runner.go:130] > Dec 12 00:37:38 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	I1212 00:37:44.213814  104530 command_runner.go:130] > Dec 12 00:37:40 multinode-859606-m02 systemd[1]: cri-docker.socket: Succeeded.
	I1212 00:37:44.213828  104530 command_runner.go:130] > Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Closed CRI Docker Socket for the API.
	I1212 00:37:44.213842  104530 command_runner.go:130] > Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Stopping CRI Docker Socket for the API.
	I1212 00:37:44.213860  104530 command_runner.go:130] > Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Starting CRI Docker Socket for the API.
	I1212 00:37:44.213874  104530 command_runner.go:130] > Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Listening on CRI Docker Socket for the API.
	I1212 00:37:44.213887  104530 command_runner.go:130] > Dec 12 00:37:44 multinode-859606-m02 systemd[1]: cri-docker.socket: Succeeded.
	I1212 00:37:44.213899  104530 command_runner.go:130] > Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Closed CRI Docker Socket for the API.
	I1212 00:37:44.213913  104530 command_runner.go:130] > Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Stopping CRI Docker Socket for the API.
	I1212 00:37:44.213929  104530 command_runner.go:130] > Dec 12 00:37:44 multinode-859606-m02 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	I1212 00:37:44.213946  104530 command_runner.go:130] > Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	I1212 00:37:44.216418  104530 out.go:177] 
	W1212 00:37:44.218157  104530 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Tue 2023-12-12 00:37:32 UTC, ends at Tue 2023-12-12 00:37:44 UTC. --
	Dec 12 00:37:33 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 00:37:33 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 00:37:36 minikube systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 00:37:36 minikube systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 00:37:36 minikube systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 00:37:36 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 00:37:36 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 00:37:38 minikube systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 00:37:38 minikube systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 00:37:38 minikube systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 00:37:38 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 00:37:38 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 00:37:40 multinode-859606-m02 systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 00:37:44 multinode-859606-m02 systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 00:37:44 multinode-859606-m02 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Tue 2023-12-12 00:37:32 UTC, ends at Tue 2023-12-12 00:37:44 UTC. --
	Dec 12 00:37:33 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 00:37:33 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 00:37:36 minikube systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 00:37:36 minikube systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 00:37:36 minikube systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 00:37:36 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 00:37:36 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 00:37:38 minikube systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 00:37:38 minikube systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 00:37:38 minikube systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 00:37:38 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 00:37:38 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 00:37:40 multinode-859606-m02 systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 00:37:44 multinode-859606-m02 systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 00:37:44 multinode-859606-m02 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	W1212 00:37:44.218182  104530 out.go:239] * 
	* 
	W1212 00:37:44.219022  104530 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:37:44.221199  104530 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:384: failed to start cluster. args "out/minikube-linux-amd64 start -p multinode-859606 --wait=true -v=8 --alsologtostderr --driver=kvm2 " : exit status 90
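The journal excerpt in the stderr block shows why the start aborts: by 00:37:44 cri-docker.service is already running, and systemd refuses to start a .socket unit whose service is already active ("Socket service cri-docker.service already active, refusing."), so `sudo systemctl restart cri-docker.socket` exits with status 1 and minikube fails with RUNTIME_ENABLE. A manual workaround sketch for the node (an assumption about remediation, not something the test performs) is to stop the service and socket together and then bring the socket up before the service:
    sudo systemctl stop cri-docker.service cri-docker.socket
    sudo systemctl start cri-docker.socket cri-docker.service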
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-859606 -n multinode-859606
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-859606 logs -n 25: (1.33359849s)
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-859606 cp multinode-859606-m02:/home/docker/cp-test.txt                       | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
	|         | multinode-859606:/home/docker/cp-test_multinode-859606-m02_multinode-859606.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-859606 ssh -n                                                                 | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
	|         | multinode-859606-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-859606 ssh -n multinode-859606 sudo cat                                       | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
	|         | /home/docker/cp-test_multinode-859606-m02_multinode-859606.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-859606 cp multinode-859606-m02:/home/docker/cp-test.txt                       | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
	|         | multinode-859606-m03:/home/docker/cp-test_multinode-859606-m02_multinode-859606-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-859606 ssh -n                                                                 | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
	|         | multinode-859606-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-859606 ssh -n multinode-859606-m03 sudo cat                                   | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
	|         | /home/docker/cp-test_multinode-859606-m02_multinode-859606-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-859606 cp testdata/cp-test.txt                                                | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
	|         | multinode-859606-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-859606 ssh -n                                                                 | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
	|         | multinode-859606-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-859606 cp multinode-859606-m03:/home/docker/cp-test.txt                       | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1229349573/001/cp-test_multinode-859606-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-859606 ssh -n                                                                 | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
	|         | multinode-859606-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-859606 cp multinode-859606-m03:/home/docker/cp-test.txt                       | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
	|         | multinode-859606:/home/docker/cp-test_multinode-859606-m03_multinode-859606.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-859606 ssh -n                                                                 | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
	|         | multinode-859606-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-859606 ssh -n multinode-859606 sudo cat                                       | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
	|         | /home/docker/cp-test_multinode-859606-m03_multinode-859606.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-859606 cp multinode-859606-m03:/home/docker/cp-test.txt                       | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
	|         | multinode-859606-m02:/home/docker/cp-test_multinode-859606-m03_multinode-859606-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-859606 ssh -n                                                                 | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
	|         | multinode-859606-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-859606 ssh -n multinode-859606-m02 sudo cat                                   | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
	|         | /home/docker/cp-test_multinode-859606-m03_multinode-859606-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-859606 node stop m03                                                          | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
	| node    | multinode-859606 node start                                                             | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:33 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-859606                                                                | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC |                     |
	| stop    | -p multinode-859606                                                                     | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
	| start   | -p multinode-859606                                                                     | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:35 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-859606                                                                | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:35 UTC |                     |
	| node    | multinode-859606 node delete                                                            | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:35 UTC | 12 Dec 23 00:35 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-859606 stop                                                                   | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:35 UTC | 12 Dec 23 00:36 UTC |
	| start   | -p multinode-859606                                                                     | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:36 UTC |                     |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 00:36:19
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:36:19.566152  104530 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:36:19.566265  104530 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:36:19.566273  104530 out.go:309] Setting ErrFile to fd 2...
	I1212 00:36:19.566277  104530 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:36:19.566462  104530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-80294/.minikube/bin
	I1212 00:36:19.566987  104530 out.go:303] Setting JSON to false
	I1212 00:36:19.567880  104530 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":11880,"bootTime":1702329500,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:36:19.567966  104530 start.go:138] virtualization: kvm guest
	I1212 00:36:19.570536  104530 out.go:177] * [multinode-859606] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 00:36:19.572060  104530 notify.go:220] Checking for updates...
	I1212 00:36:19.572071  104530 out.go:177]   - MINIKUBE_LOCATION=17764
	I1212 00:36:19.573648  104530 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:36:19.575043  104530 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17764-80294/kubeconfig
	I1212 00:36:19.576502  104530 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-80294/.minikube
	I1212 00:36:19.578073  104530 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:36:19.579463  104530 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:36:19.581288  104530 config.go:182] Loaded profile config "multinode-859606": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 00:36:19.581767  104530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1212 00:36:19.581821  104530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:36:19.596096  104530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36989
	I1212 00:36:19.596488  104530 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:36:19.597060  104530 main.go:141] libmachine: Using API Version  1
	I1212 00:36:19.597091  104530 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:36:19.597481  104530 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:36:19.597646  104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
	I1212 00:36:19.597948  104530 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 00:36:19.598247  104530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1212 00:36:19.598293  104530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:36:19.612639  104530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43365
	I1212 00:36:19.613044  104530 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:36:19.613494  104530 main.go:141] libmachine: Using API Version  1
	I1212 00:36:19.613515  104530 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:36:19.613814  104530 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:36:19.613998  104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
	I1212 00:36:19.648526  104530 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 00:36:19.650074  104530 start.go:298] selected driver: kvm2
	I1212 00:36:19.650086  104530 start.go:902] validating driver "kvm2" against &{Name:multinode-859606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:multinode-859606 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false ku
beflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMne
tPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:36:19.650266  104530 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:36:19.650710  104530 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:36:19.650794  104530 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17764-80294/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 00:36:19.664949  104530 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 00:36:19.665848  104530 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:36:19.665938  104530 cni.go:84] Creating CNI manager for ""
	I1212 00:36:19.665955  104530 cni.go:136] 2 nodes found, recommending kindnet
	I1212 00:36:19.665965  104530 start_flags.go:323] config:
	{Name:multinode-859606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-859606 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false
nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:36:19.666224  104530 iso.go:125] acquiring lock: {Name:mk9f395cbf4246894893bf64341667bb412992c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:36:19.668183  104530 out.go:177] * Starting control plane node multinode-859606 in cluster multinode-859606
	I1212 00:36:19.669663  104530 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 00:36:19.669706  104530 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17764-80294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 00:36:19.669717  104530 cache.go:56] Caching tarball of preloaded images
	I1212 00:36:19.669796  104530 preload.go:174] Found /home/jenkins/minikube-integration/17764-80294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 00:36:19.669808  104530 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 00:36:19.669923  104530 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/config.json ...
	I1212 00:36:19.670107  104530 start.go:365] acquiring machines lock for multinode-859606: {Name:mk381e91746c2e5b8a4620fe3fd447d80375e413 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 00:36:19.670157  104530 start.go:369] acquired machines lock for "multinode-859606" in 32.405µs
	I1212 00:36:19.670175  104530 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:36:19.670183  104530 fix.go:54] fixHost starting: 
	I1212 00:36:19.670424  104530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1212 00:36:19.670455  104530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:36:19.684474  104530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37667
	I1212 00:36:19.684891  104530 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:36:19.685333  104530 main.go:141] libmachine: Using API Version  1
	I1212 00:36:19.685356  104530 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:36:19.685644  104530 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:36:19.685828  104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
	I1212 00:36:19.685946  104530 main.go:141] libmachine: (multinode-859606) Calling .GetState
	I1212 00:36:19.687411  104530 fix.go:102] recreateIfNeeded on multinode-859606: state=Stopped err=<nil>
	I1212 00:36:19.687443  104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
	W1212 00:36:19.687615  104530 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 00:36:19.689763  104530 out.go:177] * Restarting existing kvm2 VM for "multinode-859606" ...
	I1212 00:36:19.691324  104530 main.go:141] libmachine: (multinode-859606) Calling .Start
	I1212 00:36:19.691550  104530 main.go:141] libmachine: (multinode-859606) Ensuring networks are active...
	I1212 00:36:19.692253  104530 main.go:141] libmachine: (multinode-859606) Ensuring network default is active
	I1212 00:36:19.692574  104530 main.go:141] libmachine: (multinode-859606) Ensuring network mk-multinode-859606 is active
	I1212 00:36:19.692847  104530 main.go:141] libmachine: (multinode-859606) Getting domain xml...
	I1212 00:36:19.693505  104530 main.go:141] libmachine: (multinode-859606) Creating domain...
	I1212 00:36:20.929419  104530 main.go:141] libmachine: (multinode-859606) Waiting to get IP...
	I1212 00:36:20.930523  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:20.930912  104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
	I1212 00:36:20.931040  104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:20.930906  104565 retry.go:31] will retry after 273.212272ms: waiting for machine to come up
	I1212 00:36:21.205460  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:21.205872  104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
	I1212 00:36:21.205901  104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:21.205852  104565 retry.go:31] will retry after 326.892458ms: waiting for machine to come up
	I1212 00:36:21.534529  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:21.534921  104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
	I1212 00:36:21.534943  104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:21.534891  104565 retry.go:31] will retry after 343.135816ms: waiting for machine to come up
	I1212 00:36:21.879459  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:21.879929  104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
	I1212 00:36:21.879953  104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:21.879870  104565 retry.go:31] will retry after 589.671783ms: waiting for machine to come up
	I1212 00:36:22.471637  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:22.472097  104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
	I1212 00:36:22.472120  104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:22.472073  104565 retry.go:31] will retry after 637.139279ms: waiting for machine to come up
	I1212 00:36:23.110881  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:23.111236  104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
	I1212 00:36:23.111267  104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:23.111178  104565 retry.go:31] will retry after 745.620292ms: waiting for machine to come up
	I1212 00:36:23.858157  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:23.858677  104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
	I1212 00:36:23.858707  104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:23.858634  104565 retry.go:31] will retry after 1.181130732s: waiting for machine to come up
	I1212 00:36:25.041534  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:25.041972  104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
	I1212 00:36:25.042004  104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:25.041923  104565 retry.go:31] will retry after 1.339637741s: waiting for machine to come up
	I1212 00:36:26.383605  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:26.383992  104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
	I1212 00:36:26.384019  104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:26.383923  104565 retry.go:31] will retry after 1.520765812s: waiting for machine to come up
	I1212 00:36:27.906937  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:27.907387  104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
	I1212 00:36:27.907415  104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:27.907357  104565 retry.go:31] will retry after 1.874600317s: waiting for machine to come up
	I1212 00:36:29.783675  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:29.784134  104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
	I1212 00:36:29.784174  104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:29.784075  104565 retry.go:31] will retry after 2.274077714s: waiting for machine to come up
	I1212 00:36:32.061527  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:32.061959  104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
	I1212 00:36:32.061986  104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:32.061913  104565 retry.go:31] will retry after 3.21102487s: waiting for machine to come up
	I1212 00:36:35.274900  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:35.275327  104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
	I1212 00:36:35.275356  104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:35.275295  104565 retry.go:31] will retry after 4.00191762s: waiting for machine to come up
	I1212 00:36:39.281352  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.281835  104530 main.go:141] libmachine: (multinode-859606) Found IP for machine: 192.168.39.40
	I1212 00:36:39.281858  104530 main.go:141] libmachine: (multinode-859606) Reserving static IP address...
	I1212 00:36:39.281874  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has current primary IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.282305  104530 main.go:141] libmachine: (multinode-859606) Reserved static IP address: 192.168.39.40
	I1212 00:36:39.282362  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "multinode-859606", mac: "52:54:00:16:26:7f", ip: "192.168.39.40"} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:39.282382  104530 main.go:141] libmachine: (multinode-859606) Waiting for SSH to be available...
	I1212 00:36:39.282413  104530 main.go:141] libmachine: (multinode-859606) DBG | skip adding static IP to network mk-multinode-859606 - found existing host DHCP lease matching {name: "multinode-859606", mac: "52:54:00:16:26:7f", ip: "192.168.39.40"}
	I1212 00:36:39.282430  104530 main.go:141] libmachine: (multinode-859606) DBG | Getting to WaitForSSH function...
	I1212 00:36:39.284738  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.285057  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:39.285110  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.285169  104530 main.go:141] libmachine: (multinode-859606) DBG | Using SSH client type: external
	I1212 00:36:39.285210  104530 main.go:141] libmachine: (multinode-859606) DBG | Using SSH private key: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606/id_rsa (-rw-------)
	I1212 00:36:39.285247  104530 main.go:141] libmachine: (multinode-859606) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.40 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 00:36:39.285259  104530 main.go:141] libmachine: (multinode-859606) DBG | About to run SSH command:
	I1212 00:36:39.285268  104530 main.go:141] libmachine: (multinode-859606) DBG | exit 0
	I1212 00:36:39.375522  104530 main.go:141] libmachine: (multinode-859606) DBG | SSH cmd err, output: <nil>: 
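	The increasing "will retry after ..." intervals above come from a backoff-style retry loop while waiting for the restarted VM to obtain an IP and answer SSH. A minimal standalone sketch of that pattern (illustrative only, not minikube's actual retry.go; the initial delay, growth factor, and jitter are assumptions) might look like:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff calls op until it succeeds or the deadline passes,
	// sleeping for a growing, jittered interval between attempts, similar to
	// the "will retry after ..." lines in the log above.
	func retryWithBackoff(deadline time.Duration, op func() error) error {
		delay := 250 * time.Millisecond
		start := time.Now()
		for {
			err := op()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("timed out: %w", err)
			}
			// Add some jitter, then grow the base delay for the next round.
			wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			delay = delay * 3 / 2
		}
	}

	func main() {
		attempts := 0
		_ = retryWithBackoff(30*time.Second, func() error {
			attempts++
			if attempts < 5 {
				return errors.New("waiting for machine to come up")
			}
			return nil
		})
	}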
	I1212 00:36:39.375955  104530 main.go:141] libmachine: (multinode-859606) Calling .GetConfigRaw
	I1212 00:36:39.376683  104530 main.go:141] libmachine: (multinode-859606) Calling .GetIP
	I1212 00:36:39.379083  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.379448  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:39.379483  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.379735  104530 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/config.json ...
	I1212 00:36:39.379953  104530 machine.go:88] provisioning docker machine ...
	I1212 00:36:39.379970  104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
	I1212 00:36:39.380177  104530 main.go:141] libmachine: (multinode-859606) Calling .GetMachineName
	I1212 00:36:39.380335  104530 buildroot.go:166] provisioning hostname "multinode-859606"
	I1212 00:36:39.380350  104530 main.go:141] libmachine: (multinode-859606) Calling .GetMachineName
	I1212 00:36:39.380483  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
	I1212 00:36:39.382706  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.383084  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:39.383109  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.383231  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
	I1212 00:36:39.383413  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:39.383548  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:39.383686  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
	I1212 00:36:39.383852  104530 main.go:141] libmachine: Using SSH client type: native
	I1212 00:36:39.384221  104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I1212 00:36:39.384236  104530 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-859606 && echo "multinode-859606" | sudo tee /etc/hostname
	I1212 00:36:39.519767  104530 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-859606
	
	I1212 00:36:39.519800  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
	I1212 00:36:39.522378  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.522790  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:39.522832  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.522956  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
	I1212 00:36:39.523177  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:39.523364  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:39.523491  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
	I1212 00:36:39.523659  104530 main.go:141] libmachine: Using SSH client type: native
	I1212 00:36:39.523993  104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I1212 00:36:39.524011  104530 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-859606' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-859606/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-859606' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:36:39.656285  104530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:36:39.656370  104530 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17764-80294/.minikube CaCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17764-80294/.minikube}
	I1212 00:36:39.656408  104530 buildroot.go:174] setting up certificates
	I1212 00:36:39.656417  104530 provision.go:83] configureAuth start
	I1212 00:36:39.656432  104530 main.go:141] libmachine: (multinode-859606) Calling .GetMachineName
	I1212 00:36:39.656702  104530 main.go:141] libmachine: (multinode-859606) Calling .GetIP
	I1212 00:36:39.659384  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.659735  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:39.659764  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.659868  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
	I1212 00:36:39.662155  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.662517  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:39.662547  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.662670  104530 provision.go:138] copyHostCerts
	I1212 00:36:39.662701  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem
	I1212 00:36:39.662745  104530 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem, removing ...
	I1212 00:36:39.662764  104530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem
	I1212 00:36:39.662840  104530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem (1078 bytes)
	I1212 00:36:39.662932  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem
	I1212 00:36:39.662954  104530 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem, removing ...
	I1212 00:36:39.662963  104530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem
	I1212 00:36:39.662998  104530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem (1123 bytes)
	I1212 00:36:39.663072  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem
	I1212 00:36:39.663106  104530 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem, removing ...
	I1212 00:36:39.663115  104530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem
	I1212 00:36:39.663149  104530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem (1679 bytes)
	I1212 00:36:39.663211  104530 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca-key.pem org=jenkins.multinode-859606 san=[192.168.39.40 192.168.39.40 localhost 127.0.0.1 minikube multinode-859606]
	I1212 00:36:39.752771  104530 provision.go:172] copyRemoteCerts
	I1212 00:36:39.752840  104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:36:39.752864  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
	I1212 00:36:39.755641  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.755981  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:39.756012  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.756148  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
	I1212 00:36:39.756362  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:39.756505  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
	I1212 00:36:39.756620  104530 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606/id_rsa Username:docker}
	I1212 00:36:39.848757  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:36:39.848827  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:36:39.872145  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:36:39.872230  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:36:39.895524  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:36:39.895625  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 00:36:39.919081  104530 provision.go:86] duration metric: configureAuth took 262.648578ms
	I1212 00:36:39.919117  104530 buildroot.go:189] setting minikube options for container-runtime
	I1212 00:36:39.919362  104530 config.go:182] Loaded profile config "multinode-859606": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 00:36:39.919392  104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
	I1212 00:36:39.919652  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
	I1212 00:36:39.922322  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.922662  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:39.922694  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:39.922873  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
	I1212 00:36:39.923053  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:39.923205  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:39.923322  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
	I1212 00:36:39.923479  104530 main.go:141] libmachine: Using SSH client type: native
	I1212 00:36:39.923797  104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I1212 00:36:39.923808  104530 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 00:36:40.049654  104530 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 00:36:40.049683  104530 buildroot.go:70] root file system type: tmpfs
	I1212 00:36:40.049826  104530 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 00:36:40.049854  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
	I1212 00:36:40.052273  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:40.052615  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:40.052648  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:40.052798  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
	I1212 00:36:40.053014  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:40.053178  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:40.053328  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
	I1212 00:36:40.053470  104530 main.go:141] libmachine: Using SSH client type: native
	I1212 00:36:40.053822  104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I1212 00:36:40.053890  104530 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 00:36:40.188800  104530 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 00:36:40.188832  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
	I1212 00:36:40.191559  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:40.191974  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:40.192007  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:40.192190  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
	I1212 00:36:40.192371  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:40.192563  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:40.192665  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
	I1212 00:36:40.192866  104530 main.go:141] libmachine: Using SSH client type: native
	I1212 00:36:40.193267  104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I1212 00:36:40.193286  104530 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 00:36:41.206767  104530 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1212 00:36:41.206800  104530 machine.go:91] provisioned docker machine in 1.826833328s
	I1212 00:36:41.206817  104530 start.go:300] post-start starting for "multinode-859606" (driver="kvm2")
	I1212 00:36:41.206830  104530 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:36:41.206852  104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
	I1212 00:36:41.207178  104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:36:41.207202  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
	I1212 00:36:41.209997  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:41.210348  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:41.210381  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:41.210498  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
	I1212 00:36:41.210690  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:41.210833  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
	I1212 00:36:41.210981  104530 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606/id_rsa Username:docker}
	I1212 00:36:41.301876  104530 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:36:41.306227  104530 command_runner.go:130] > NAME=Buildroot
	I1212 00:36:41.306246  104530 command_runner.go:130] > VERSION=2021.02.12-1-g0ec83c8-dirty
	I1212 00:36:41.306250  104530 command_runner.go:130] > ID=buildroot
	I1212 00:36:41.306262  104530 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 00:36:41.306266  104530 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 00:36:41.306469  104530 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 00:36:41.306487  104530 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-80294/.minikube/addons for local assets ...
	I1212 00:36:41.306534  104530 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-80294/.minikube/files for local assets ...
	I1212 00:36:41.306599  104530 filesync.go:149] local asset: /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem -> 876092.pem in /etc/ssl/certs
	I1212 00:36:41.306609  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem -> /etc/ssl/certs/876092.pem
	I1212 00:36:41.306693  104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:36:41.315869  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem --> /etc/ssl/certs/876092.pem (1708 bytes)
	I1212 00:36:41.338667  104530 start.go:303] post-start completed in 131.83456ms
	I1212 00:36:41.338691  104530 fix.go:56] fixHost completed within 21.668507657s
	I1212 00:36:41.338718  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
	I1212 00:36:41.341292  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:41.341664  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:41.341694  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:41.341888  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
	I1212 00:36:41.342101  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:41.342241  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:41.342408  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
	I1212 00:36:41.342541  104530 main.go:141] libmachine: Using SSH client type: native
	I1212 00:36:41.342886  104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I1212 00:36:41.342902  104530 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 00:36:41.468622  104530 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702341401.415199028
	
	I1212 00:36:41.468653  104530 fix.go:206] guest clock: 1702341401.415199028
	I1212 00:36:41.468663  104530 fix.go:219] Guest: 2023-12-12 00:36:41.415199028 +0000 UTC Remote: 2023-12-12 00:36:41.338694258 +0000 UTC m=+21.821939649 (delta=76.50477ms)
	I1212 00:36:41.468688  104530 fix.go:190] guest clock delta is within tolerance: 76.50477ms
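	The "guest clock" lines above compare the VM's `date +%s.%N` output against the host clock and accept the restart if the delta stays within a tolerance. A small sketch of that comparison (illustrative only; the parsing helper and the 2-second tolerance are assumptions, not minikube's fix.go):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns "seconds.nanoseconds" output from the guest's
	// `date +%s.%N` into a time.Time value.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		// Value taken from the log above; run "now" the delta will of course
		// be large, the point is only the shape of the check.
		guest, err := parseGuestClock("1702341401.415199028")
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed tolerance, for illustration
		fmt.Printf("guest clock delta: %v (within %v tolerance: %v)\n", delta, tolerance, delta < tolerance)
	}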
	I1212 00:36:41.468695  104530 start.go:83] releasing machines lock for "multinode-859606", held for 21.798528151s
	I1212 00:36:41.468721  104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
	I1212 00:36:41.469036  104530 main.go:141] libmachine: (multinode-859606) Calling .GetIP
	I1212 00:36:41.471587  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:41.471996  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:41.472029  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:41.472196  104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
	I1212 00:36:41.472679  104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
	I1212 00:36:41.472871  104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
	I1212 00:36:41.472969  104530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:36:41.473018  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
	I1212 00:36:41.473104  104530 ssh_runner.go:195] Run: cat /version.json
	I1212 00:36:41.473135  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
	I1212 00:36:41.475372  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:41.475531  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:41.475739  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:41.475765  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:41.475949  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:41.475965  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
	I1212 00:36:41.475979  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:41.476148  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
	I1212 00:36:41.476167  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:41.476322  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
	I1212 00:36:41.476325  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:36:41.476507  104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
	I1212 00:36:41.476503  104530 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606/id_rsa Username:docker}
	I1212 00:36:41.476677  104530 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606/id_rsa Username:docker}
	I1212 00:36:41.586671  104530 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 00:36:41.587519  104530 command_runner.go:130] > {"iso_version": "v1.32.1-1701996673-17738", "kicbase_version": "v0.0.42-1701974066-17719", "minikube_version": "v1.32.0", "commit": "2518fadffa02a308edcd7fa670f350a21819c5e4"}
	I1212 00:36:41.587648  104530 ssh_runner.go:195] Run: systemctl --version
	I1212 00:36:41.593336  104530 command_runner.go:130] > systemd 247 (247)
	I1212 00:36:41.593360  104530 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1212 00:36:41.593423  104530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:36:41.598984  104530 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 00:36:41.599019  104530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:36:41.599060  104530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:36:41.614960  104530 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 00:36:41.614996  104530 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:36:41.615008  104530 start.go:475] detecting cgroup driver to use...
	I1212 00:36:41.615155  104530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:36:41.631749  104530 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 00:36:41.632091  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 00:36:41.642135  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 00:36:41.651964  104530 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 00:36:41.652033  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 00:36:41.661909  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:36:41.672216  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 00:36:41.681323  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:36:41.691358  104530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:36:41.701487  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 00:36:41.711473  104530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:36:41.720346  104530 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 00:36:41.720490  104530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:36:41.729603  104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:36:41.829613  104530 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 00:36:41.846807  104530 start.go:475] detecting cgroup driver to use...
	I1212 00:36:41.846894  104530 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 00:36:41.859661  104530 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 00:36:41.860603  104530 command_runner.go:130] > [Unit]
	I1212 00:36:41.860621  104530 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 00:36:41.860629  104530 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 00:36:41.860638  104530 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 00:36:41.860648  104530 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 00:36:41.860662  104530 command_runner.go:130] > StartLimitBurst=3
	I1212 00:36:41.860671  104530 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 00:36:41.860679  104530 command_runner.go:130] > [Service]
	I1212 00:36:41.860686  104530 command_runner.go:130] > Type=notify
	I1212 00:36:41.860694  104530 command_runner.go:130] > Restart=on-failure
	I1212 00:36:41.860715  104530 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 00:36:41.860734  104530 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 00:36:41.860748  104530 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 00:36:41.860757  104530 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 00:36:41.860767  104530 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 00:36:41.860781  104530 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 00:36:41.860791  104530 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 00:36:41.860803  104530 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 00:36:41.860812  104530 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 00:36:41.860818  104530 command_runner.go:130] > ExecStart=
	I1212 00:36:41.860837  104530 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1212 00:36:41.860845  104530 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 00:36:41.860854  104530 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 00:36:41.860863  104530 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 00:36:41.860867  104530 command_runner.go:130] > LimitNOFILE=infinity
	I1212 00:36:41.860872  104530 command_runner.go:130] > LimitNPROC=infinity
	I1212 00:36:41.860876  104530 command_runner.go:130] > LimitCORE=infinity
	I1212 00:36:41.860881  104530 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 00:36:41.860886  104530 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 00:36:41.860893  104530 command_runner.go:130] > TasksMax=infinity
	I1212 00:36:41.860897  104530 command_runner.go:130] > TimeoutStartSec=0
	I1212 00:36:41.860903  104530 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 00:36:41.860907  104530 command_runner.go:130] > Delegate=yes
	I1212 00:36:41.860912  104530 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 00:36:41.860916  104530 command_runner.go:130] > KillMode=process
	I1212 00:36:41.860921  104530 command_runner.go:130] > [Install]
	I1212 00:36:41.860934  104530 command_runner.go:130] > WantedBy=multi-user.target
	I1212 00:36:41.861408  104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:36:41.875266  104530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:36:41.894559  104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:36:41.907084  104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 00:36:41.919502  104530 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 00:36:41.951570  104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 00:36:41.963632  104530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:36:41.980713  104530 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 00:36:41.980788  104530 ssh_runner.go:195] Run: which cri-dockerd
	I1212 00:36:41.984334  104530 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 00:36:41.984645  104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 00:36:41.993852  104530 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 00:36:42.009538  104530 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 00:36:42.118265  104530 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 00:36:42.228976  104530 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 00:36:42.229126  104530 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 00:36:42.245311  104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:36:42.345292  104530 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 00:36:43.830127  104530 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.484785426s)
	I1212 00:36:43.830211  104530 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 00:36:43.943279  104530 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 00:36:44.053942  104530 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 00:36:44.164844  104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:36:44.275934  104530 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 00:36:44.291963  104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:36:44.392776  104530 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1212 00:36:44.474244  104530 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 00:36:44.474311  104530 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 00:36:44.480515  104530 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1212 00:36:44.480535  104530 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 00:36:44.480541  104530 command_runner.go:130] > Device: 16h/22d	Inode: 819         Links: 1
	I1212 00:36:44.480548  104530 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1212 00:36:44.480554  104530 command_runner.go:130] > Access: 2023-12-12 00:36:44.352977075 +0000
	I1212 00:36:44.480559  104530 command_runner.go:130] > Modify: 2023-12-12 00:36:44.352977075 +0000
	I1212 00:36:44.480564  104530 command_runner.go:130] > Change: 2023-12-12 00:36:44.355977075 +0000
	I1212 00:36:44.480567  104530 command_runner.go:130] >  Birth: -
	I1212 00:36:44.480717  104530 start.go:543] Will wait 60s for crictl version
	I1212 00:36:44.480773  104530 ssh_runner.go:195] Run: which crictl
	I1212 00:36:44.484627  104530 command_runner.go:130] > /usr/bin/crictl
	I1212 00:36:44.484837  104530 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:36:44.546652  104530 command_runner.go:130] > Version:  0.1.0
	I1212 00:36:44.546684  104530 command_runner.go:130] > RuntimeName:  docker
	I1212 00:36:44.546692  104530 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1212 00:36:44.546719  104530 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 00:36:44.548311  104530 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1212 00:36:44.548389  104530 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 00:36:44.576456  104530 command_runner.go:130] > 24.0.7
	I1212 00:36:44.576586  104530 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 00:36:44.599730  104530 command_runner.go:130] > 24.0.7
	I1212 00:36:44.602571  104530 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1212 00:36:44.602615  104530 main.go:141] libmachine: (multinode-859606) Calling .GetIP
	I1212 00:36:44.605105  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:44.605567  104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:36:44.605594  104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:36:44.605828  104530 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 00:36:44.609867  104530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
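	The one-liner above keeps the host.minikube.internal entry idempotent: it strips any stale line, appends the current gateway IP, and installs the result with a copy rather than editing /etc/hosts in place. The same pattern as a standalone sketch (IP and hostname are placeholders):

	    IP=192.168.39.1; NAME=host.minikube.internal
	    # drop any existing mapping for $NAME, append the fresh one, then install atomically
	    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
	    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$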
	I1212 00:36:44.622768  104530 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 00:36:44.622818  104530 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 00:36:44.642692  104530 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1212 00:36:44.642720  104530 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1212 00:36:44.642729  104530 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1212 00:36:44.642749  104530 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1212 00:36:44.642756  104530 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1212 00:36:44.642764  104530 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1212 00:36:44.642773  104530 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1212 00:36:44.642785  104530 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1212 00:36:44.642793  104530 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:36:44.642804  104530 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1212 00:36:44.642841  104530 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1212 00:36:44.642858  104530 docker.go:601] Images already preloaded, skipping extraction
	I1212 00:36:44.642930  104530 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 00:36:44.661008  104530 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1212 00:36:44.661047  104530 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1212 00:36:44.661054  104530 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1212 00:36:44.661062  104530 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1212 00:36:44.661068  104530 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1212 00:36:44.661084  104530 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1212 00:36:44.661093  104530 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1212 00:36:44.661108  104530 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1212 00:36:44.661116  104530 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:36:44.661126  104530 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1212 00:36:44.661894  104530 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1212 00:36:44.661911  104530 cache_images.go:84] Images are preloaded, skipping loading
	I1212 00:36:44.661965  104530 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 00:36:44.688198  104530 command_runner.go:130] > cgroupfs
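	The query above is how minikube learns Docker's cgroup driver so that the kubelet configuration it renders below can match it (cgroupDriver: cgroupfs). A minimal cross-check of both sides, assuming the node is reachable over `minikube ssh`:

	    # Docker side: should print "cgroupfs"
	    minikube ssh -p multinode-859606 -- docker info --format '{{.CgroupDriver}}'
	    # Kubelet side: the driver written into the kubelet config used at startup
	    minikube ssh -p multinode-859606 -- sudo grep cgroupDriver /var/lib/kubelet/config.yaml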
	I1212 00:36:44.688431  104530 cni.go:84] Creating CNI manager for ""
	I1212 00:36:44.688451  104530 cni.go:136] 2 nodes found, recommending kindnet
	I1212 00:36:44.688483  104530 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 00:36:44.688527  104530 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.40 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-859606 NodeName:multinode-859606 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.40"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.40 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:36:44.688714  104530 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.40
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-859606"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.40
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.40"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:36:44.688816  104530 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-859606 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.40
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-859606 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 00:36:44.688879  104530 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 00:36:44.697808  104530 command_runner.go:130] > kubeadm
	I1212 00:36:44.697826  104530 command_runner.go:130] > kubectl
	I1212 00:36:44.697831  104530 command_runner.go:130] > kubelet
	I1212 00:36:44.697894  104530 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 00:36:44.697957  104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:36:44.705971  104530 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1212 00:36:44.720935  104530 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:36:44.735886  104530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
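	The kubeadm config and kubelet flags rendered above are staged as /var/tmp/minikube/kubeadm.yaml.new (the 2102-byte copy just above) and swapped into place later in this log. One way to sanity-check the staged file with the bundled kubeadm binary, sketched under the assumption that the `kubeadm config validate` subcommand is available in v1.28:

	    minikube ssh -p multinode-859606 -- sudo /var/lib/minikube/binaries/v1.28.4/kubeadm \
	      config validate --config /var/tmp/minikube/kubeadm.yaml.new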
	I1212 00:36:44.751846  104530 ssh_runner.go:195] Run: grep 192.168.39.40	control-plane.minikube.internal$ /etc/hosts
	I1212 00:36:44.755479  104530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.40	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:36:44.767240  104530 certs.go:56] Setting up /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606 for IP: 192.168.39.40
	I1212 00:36:44.767277  104530 certs.go:190] acquiring lock for shared ca certs: {Name:mk30ad7b34272eb8ac2c2d0da18d8d4f87fa28a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:36:44.767442  104530 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17764-80294/.minikube/ca.key
	I1212 00:36:44.767492  104530 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17764-80294/.minikube/proxy-client-ca.key
	I1212 00:36:44.767569  104530 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/client.key
	I1212 00:36:44.767614  104530 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/apiserver.key.7fcbe345
	I1212 00:36:44.767658  104530 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/proxy-client.key
	I1212 00:36:44.767671  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:36:44.767685  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:36:44.767697  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:36:44.767709  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:36:44.767723  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:36:44.767736  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:36:44.767748  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:36:44.767759  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:36:44.767806  104530 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/home/jenkins/minikube-integration/17764-80294/.minikube/certs/87609.pem (1338 bytes)
	W1212 00:36:44.767833  104530 certs.go:433] ignoring /home/jenkins/minikube-integration/17764-80294/.minikube/certs/home/jenkins/minikube-integration/17764-80294/.minikube/certs/87609_empty.pem, impossibly tiny 0 bytes
	I1212 00:36:44.767842  104530 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 00:36:44.767866  104530 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:36:44.767895  104530 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:36:44.767941  104530 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem (1679 bytes)
	I1212 00:36:44.767991  104530 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem (1708 bytes)
	I1212 00:36:44.768017  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem -> /usr/share/ca-certificates/876092.pem
	I1212 00:36:44.768033  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:36:44.768048  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/87609.pem -> /usr/share/ca-certificates/87609.pem
	I1212 00:36:44.768657  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 00:36:44.791629  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:36:44.814579  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:36:44.837176  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 00:36:44.859769  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:36:44.882517  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:36:44.905279  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:36:44.927814  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:36:44.950936  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem --> /usr/share/ca-certificates/876092.pem (1708 bytes)
	I1212 00:36:44.973314  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:36:44.995879  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/certs/87609.pem --> /usr/share/ca-certificates/87609.pem (1338 bytes)
	I1212 00:36:45.018814  104530 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:36:45.034741  104530 ssh_runner.go:195] Run: openssl version
	I1212 00:36:45.040084  104530 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 00:36:45.040159  104530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:36:45.049710  104530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:36:45.054223  104530 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:36:45.054253  104530 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:36:45.054292  104530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:36:45.059527  104530 command_runner.go:130] > b5213941
	I1212 00:36:45.059696  104530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 00:36:45.069012  104530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/87609.pem && ln -fs /usr/share/ca-certificates/87609.pem /etc/ssl/certs/87609.pem"
	I1212 00:36:45.078693  104530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/87609.pem
	I1212 00:36:45.083070  104530 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 00:16 /usr/share/ca-certificates/87609.pem
	I1212 00:36:45.083289  104530 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:16 /usr/share/ca-certificates/87609.pem
	I1212 00:36:45.083354  104530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/87609.pem
	I1212 00:36:45.089122  104530 command_runner.go:130] > 51391683
	I1212 00:36:45.089194  104530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/87609.pem /etc/ssl/certs/51391683.0"
	I1212 00:36:45.099154  104530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/876092.pem && ln -fs /usr/share/ca-certificates/876092.pem /etc/ssl/certs/876092.pem"
	I1212 00:36:45.108823  104530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/876092.pem
	I1212 00:36:45.113316  104530 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 00:16 /usr/share/ca-certificates/876092.pem
	I1212 00:36:45.113568  104530 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:16 /usr/share/ca-certificates/876092.pem
	I1212 00:36:45.113613  104530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/876092.pem
	I1212 00:36:45.118966  104530 command_runner.go:130] > 3ec20f2e
	I1212 00:36:45.119043  104530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/876092.pem /etc/ssl/certs/3ec20f2e.0"
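	Each CA above is made trusted by hashing its subject (`openssl x509 -hash -noout`) and symlinking /etc/ssl/certs/<hash>.0 at the PEM file, which is how OpenSSL locates trust anchors. The same pattern as a standalone sketch, using the minikubeCA file from this run:

	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")    # b5213941 in the output above
	    # OpenSSL looks up trust anchors as <subject-hash>.0 inside the certs directory
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"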
	I1212 00:36:45.128635  104530 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 00:36:45.132978  104530 command_runner.go:130] > ca.crt
	I1212 00:36:45.132994  104530 command_runner.go:130] > ca.key
	I1212 00:36:45.133000  104530 command_runner.go:130] > healthcheck-client.crt
	I1212 00:36:45.133004  104530 command_runner.go:130] > healthcheck-client.key
	I1212 00:36:45.133008  104530 command_runner.go:130] > peer.crt
	I1212 00:36:45.133014  104530 command_runner.go:130] > peer.key
	I1212 00:36:45.133018  104530 command_runner.go:130] > server.crt
	I1212 00:36:45.133022  104530 command_runner.go:130] > server.key
	I1212 00:36:45.133062  104530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:36:45.138700  104530 command_runner.go:130] > Certificate will not expire
	I1212 00:36:45.138753  104530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:36:45.143928  104530 command_runner.go:130] > Certificate will not expire
	I1212 00:36:45.143989  104530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:36:45.149974  104530 command_runner.go:130] > Certificate will not expire
	I1212 00:36:45.150040  104530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:36:45.155645  104530 command_runner.go:130] > Certificate will not expire
	I1212 00:36:45.155702  104530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:36:45.161120  104530 command_runner.go:130] > Certificate will not expire
	I1212 00:36:45.161172  104530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 00:36:45.166435  104530 command_runner.go:130] > Certificate will not expire
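	The `-checkend 86400` probes above ask whether each certificate expires within the next 24 hours (86400 seconds); a non-zero exit is what would flag a cert for regeneration. A compact sketch running the same check across the cert directories used in this log:

	    for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
	      # exit status is non-zero when the certificate expires within 86400 seconds
	      sudo openssl x509 -noout -in "$c" -checkend 86400 && echo "ok: $c" || echo "expiring: $c"
	    done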
	I1212 00:36:45.166596  104530 kubeadm.go:404] StartCluster: {Name:multinode-859606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-859606 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubev
irt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:36:45.166771  104530 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 00:36:45.186362  104530 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:36:45.195450  104530 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 00:36:45.195478  104530 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 00:36:45.195486  104530 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 00:36:45.195492  104530 command_runner.go:130] > member
	I1212 00:36:45.195591  104530 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 00:36:45.195612  104530 kubeadm.go:636] restartCluster start
	I1212 00:36:45.195674  104530 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:36:45.205557  104530 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:45.205994  104530 kubeconfig.go:135] verify returned: extract IP: "multinode-859606" does not appear in /home/jenkins/minikube-integration/17764-80294/kubeconfig
	I1212 00:36:45.206105  104530 kubeconfig.go:146] "multinode-859606" context is missing from /home/jenkins/minikube-integration/17764-80294/kubeconfig - will repair!
	I1212 00:36:45.206407  104530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-80294/kubeconfig: {Name:mkf7cdfdedbee22114abcb4b16af22e84438f3f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:36:45.206781  104530 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17764-80294/kubeconfig
	I1212 00:36:45.207021  104530 kapi.go:59] client config for multinode-859606: &rest.Config{Host:"https://192.168.39.40:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/client.crt", KeyFile:"/home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/client.key", CAFile:"/home/jenkins/minikube-integration/17764-80294/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:36:45.207626  104530 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 00:36:45.207759  104530 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:36:45.216109  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:45.216158  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:45.227128  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:45.227145  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:45.227181  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:45.237721  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:45.738433  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:45.738513  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:45.749916  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:46.238556  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:46.238626  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:46.249796  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:46.738436  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:46.738510  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:46.750275  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:47.238820  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:47.238918  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:47.250330  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:47.737880  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:47.737967  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:47.749173  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:48.238871  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:48.238981  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:48.250477  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:48.737907  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:48.737986  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:48.749969  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:49.238635  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:49.238729  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:49.250296  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:49.738397  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:49.738483  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:49.750014  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:50.238638  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:50.238725  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:50.250537  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:50.738104  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:50.738212  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:50.749728  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:51.238279  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:51.238383  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:51.249977  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:51.738590  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:51.738674  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:51.750353  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:52.237967  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:52.238033  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:52.249749  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:52.738311  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:52.738400  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:52.749734  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:53.238473  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:53.238570  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:53.249803  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:53.738439  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:53.738545  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:53.749846  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:54.238458  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:54.238551  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:54.250276  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:54.738396  104530 api_server.go:166] Checking apiserver status ...
	I1212 00:36:54.738477  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 00:36:54.749594  104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:36:55.216372  104530 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 00:36:55.216413  104530 kubeadm.go:1135] stopping kube-system containers ...
	I1212 00:36:55.216471  104530 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 00:36:55.242800  104530 command_runner.go:130] > abde5ad85d4a
	I1212 00:36:55.242825  104530 command_runner.go:130] > 6960e84b00b8
	I1212 00:36:55.242831  104530 command_runner.go:130] > 55413175770e
	I1212 00:36:55.242840  104530 command_runner.go:130] > 56fd6254d6e1
	I1212 00:36:55.242847  104530 command_runner.go:130] > b63a75f45416
	I1212 00:36:55.242852  104530 command_runner.go:130] > 19421dc21753
	I1212 00:36:55.242858  104530 command_runner.go:130] > ecfcbd586321
	I1212 00:36:55.242864  104530 command_runner.go:130] > 9767a413586e
	I1212 00:36:55.242869  104530 command_runner.go:130] > 4ba778c674f0
	I1212 00:36:55.242874  104530 command_runner.go:130] > 19f9d76e8f1c
	I1212 00:36:55.242880  104530 command_runner.go:130] > fc27b8583502
	I1212 00:36:55.242885  104530 command_runner.go:130] > a49117d4a4c8
	I1212 00:36:55.242891  104530 command_runner.go:130] > 5aa25d818283
	I1212 00:36:55.242897  104530 command_runner.go:130] > ed0cff49857f
	I1212 00:36:55.242904  104530 command_runner.go:130] > 510b18b7b6d6
	I1212 00:36:55.242914  104530 command_runner.go:130] > 34ac7e63ee51
	I1212 00:36:55.242922  104530 command_runner.go:130] > dc5d8378ca26
	I1212 00:36:55.242929  104530 command_runner.go:130] > 335bd2869121
	I1212 00:36:55.242939  104530 command_runner.go:130] > 10ca85c531dc
	I1212 00:36:55.242951  104530 command_runner.go:130] > dcead5249b2f
	I1212 00:36:55.242961  104530 command_runner.go:130] > c3360b039380
	I1212 00:36:55.242971  104530 command_runner.go:130] > 08edfeaa5cab
	I1212 00:36:55.242979  104530 command_runner.go:130] > 5c674269e2eb
	I1212 00:36:55.242986  104530 command_runner.go:130] > e80fc43dacae
	I1212 00:36:55.242994  104530 command_runner.go:130] > 547ce8660107
	I1212 00:36:55.243001  104530 command_runner.go:130] > 6fce6e649e1a
	I1212 00:36:55.243008  104530 command_runner.go:130] > 7db8deb95763
	I1212 00:36:55.243015  104530 command_runner.go:130] > fef547bfcef9
	I1212 00:36:55.243026  104530 command_runner.go:130] > afcf416fd476
	I1212 00:36:55.243035  104530 command_runner.go:130] > d42aca9dd643
	I1212 00:36:55.243041  104530 command_runner.go:130] > 757215f5e48f
	I1212 00:36:55.243048  104530 command_runner.go:130] > f785241ab5c9
	I1212 00:36:55.243103  104530 docker.go:469] Stopping containers: [abde5ad85d4a 6960e84b00b8 55413175770e 56fd6254d6e1 b63a75f45416 19421dc21753 ecfcbd586321 9767a413586e 4ba778c674f0 19f9d76e8f1c fc27b8583502 a49117d4a4c8 5aa25d818283 ed0cff49857f 510b18b7b6d6 34ac7e63ee51 dc5d8378ca26 335bd2869121 10ca85c531dc dcead5249b2f c3360b039380 08edfeaa5cab 5c674269e2eb e80fc43dacae 547ce8660107 6fce6e649e1a 7db8deb95763 fef547bfcef9 afcf416fd476 d42aca9dd643 757215f5e48f f785241ab5c9]
	I1212 00:36:55.243180  104530 ssh_runner.go:195] Run: docker stop abde5ad85d4a 6960e84b00b8 55413175770e 56fd6254d6e1 b63a75f45416 19421dc21753 ecfcbd586321 9767a413586e 4ba778c674f0 19f9d76e8f1c fc27b8583502 a49117d4a4c8 5aa25d818283 ed0cff49857f 510b18b7b6d6 34ac7e63ee51 dc5d8378ca26 335bd2869121 10ca85c531dc dcead5249b2f c3360b039380 08edfeaa5cab 5c674269e2eb e80fc43dacae 547ce8660107 6fce6e649e1a 7db8deb95763 fef547bfcef9 afcf416fd476 d42aca9dd643 757215f5e48f f785241ab5c9
	I1212 00:36:55.267560  104530 command_runner.go:130] > abde5ad85d4a
	I1212 00:36:55.267589  104530 command_runner.go:130] > 6960e84b00b8
	I1212 00:36:55.267595  104530 command_runner.go:130] > 55413175770e
	I1212 00:36:55.267601  104530 command_runner.go:130] > 56fd6254d6e1
	I1212 00:36:55.267608  104530 command_runner.go:130] > b63a75f45416
	I1212 00:36:55.267613  104530 command_runner.go:130] > 19421dc21753
	I1212 00:36:55.267630  104530 command_runner.go:130] > ecfcbd586321
	I1212 00:36:55.267637  104530 command_runner.go:130] > 9767a413586e
	I1212 00:36:55.267643  104530 command_runner.go:130] > 4ba778c674f0
	I1212 00:36:55.267650  104530 command_runner.go:130] > 19f9d76e8f1c
	I1212 00:36:55.267656  104530 command_runner.go:130] > fc27b8583502
	I1212 00:36:55.267666  104530 command_runner.go:130] > a49117d4a4c8
	I1212 00:36:55.267672  104530 command_runner.go:130] > 5aa25d818283
	I1212 00:36:55.267679  104530 command_runner.go:130] > ed0cff49857f
	I1212 00:36:55.267707  104530 command_runner.go:130] > 510b18b7b6d6
	I1212 00:36:55.267723  104530 command_runner.go:130] > 34ac7e63ee51
	I1212 00:36:55.267729  104530 command_runner.go:130] > dc5d8378ca26
	I1212 00:36:55.267735  104530 command_runner.go:130] > 335bd2869121
	I1212 00:36:55.267742  104530 command_runner.go:130] > 10ca85c531dc
	I1212 00:36:55.267757  104530 command_runner.go:130] > dcead5249b2f
	I1212 00:36:55.267764  104530 command_runner.go:130] > c3360b039380
	I1212 00:36:55.267770  104530 command_runner.go:130] > 08edfeaa5cab
	I1212 00:36:55.267779  104530 command_runner.go:130] > 5c674269e2eb
	I1212 00:36:55.267785  104530 command_runner.go:130] > e80fc43dacae
	I1212 00:36:55.267798  104530 command_runner.go:130] > 547ce8660107
	I1212 00:36:55.267807  104530 command_runner.go:130] > 6fce6e649e1a
	I1212 00:36:55.267816  104530 command_runner.go:130] > 7db8deb95763
	I1212 00:36:55.267825  104530 command_runner.go:130] > fef547bfcef9
	I1212 00:36:55.267834  104530 command_runner.go:130] > afcf416fd476
	I1212 00:36:55.267843  104530 command_runner.go:130] > d42aca9dd643
	I1212 00:36:55.267852  104530 command_runner.go:130] > 757215f5e48f
	I1212 00:36:55.267861  104530 command_runner.go:130] > f785241ab5c9
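	The cleanup above relies on the cri-dockerd/kubelet container naming scheme, k8s_<container>_<pod>_<namespace>_..., so the `name=k8s_.*_(kube-system)_` filter matches every kube-system container regardless of state. The same selection as a one-line sketch:

	    docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}}' | xargs -r docker stop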
	I1212 00:36:55.268959  104530 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 00:36:55.283176  104530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:36:55.291931  104530 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1212 00:36:55.291964  104530 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1212 00:36:55.291973  104530 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1212 00:36:55.291980  104530 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:36:55.292025  104530 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:36:55.292077  104530 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:36:55.300972  104530 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 00:36:55.300994  104530 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:55.409847  104530 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:36:55.410210  104530 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1212 00:36:55.410700  104530 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1212 00:36:55.411130  104530 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 00:36:55.411654  104530 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1212 00:36:55.412107  104530 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1212 00:36:55.413059  104530 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1212 00:36:55.413464  104530 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1212 00:36:55.413846  104530 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1212 00:36:55.414303  104530 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 00:36:55.414667  104530 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 00:36:55.416560  104530 command_runner.go:130] > [certs] Using the existing "sa" key
	I1212 00:36:55.416642  104530 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:56.211128  104530 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:36:56.211154  104530 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:36:56.211167  104530 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:36:56.211176  104530 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:36:56.211190  104530 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:36:56.211225  104530 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:56.277692  104530 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:36:56.278847  104530 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:36:56.278889  104530 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 00:36:56.393138  104530 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:56.490674  104530 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:36:56.490707  104530 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:36:56.495141  104530 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:36:56.496969  104530 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:36:56.505734  104530 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:36:56.568063  104530 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
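	Rather than a full `kubeadm init`, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged config. The equivalent sequence as a sketch, run on the node with the same PATH override as in the log:

	    KPATH=/var/lib/minikube/binaries/v1.28.4:$PATH
	    # $phase is intentionally unquoted so multi-word phases split into subcommand + scope
	    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	      sudo env PATH="$KPATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	    done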
	I1212 00:36:56.574809  104530 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:36:56.574879  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:56.587806  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:57.100023  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:57.600145  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:58.099727  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:58.599716  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:59.099714  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:36:59.599934  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:00.099594  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:00.117319  104530 command_runner.go:130] > 1800
	I1212 00:37:00.117686  104530 api_server.go:72] duration metric: took 3.542880083s to wait for apiserver process to appear ...
	I1212 00:37:00.117709  104530 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:37:00.117727  104530 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
	I1212 00:37:02.771626  104530 api_server.go:279] https://192.168.39.40:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 00:37:02.771661  104530 api_server.go:103] status: https://192.168.39.40:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 00:37:02.771677  104530 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
	I1212 00:37:02.838010  104530 api_server.go:279] https://192.168.39.40:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 00:37:02.838048  104530 api_server.go:103] status: https://192.168.39.40:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 00:37:03.338843  104530 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
	I1212 00:37:03.344825  104530 api_server.go:279] https://192.168.39.40:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 00:37:03.344863  104530 api_server.go:103] status: https://192.168.39.40:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 00:37:03.838231  104530 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
	I1212 00:37:03.845511  104530 api_server.go:279] https://192.168.39.40:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 00:37:03.845548  104530 api_server.go:103] status: https://192.168.39.40:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 00:37:04.339177  104530 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
	I1212 00:37:04.344349  104530 api_server.go:279] https://192.168.39.40:8443/healthz returned 200:
	ok
	I1212 00:37:04.344445  104530 round_trippers.go:463] GET https://192.168.39.40:8443/version
	I1212 00:37:04.344456  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:04.344469  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:04.344482  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:04.352515  104530 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1212 00:37:04.352546  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:04.352557  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:04.352567  104530 round_trippers.go:580]     Content-Length: 264
	I1212 00:37:04.352575  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:04 GMT
	I1212 00:37:04.352584  104530 round_trippers.go:580]     Audit-Id: 63ee9643-66fd-4e1a-a212-0e71234e47a2
	I1212 00:37:04.352591  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:04.352598  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:04.352608  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:04.352649  104530 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 00:37:04.352786  104530 api_server.go:141] control plane version: v1.28.4
	I1212 00:37:04.352817  104530 api_server.go:131] duration metric: took 4.235100574s to wait for apiserver health ...
	I1212 00:37:04.352829  104530 cni.go:84] Creating CNI manager for ""
	I1212 00:37:04.352840  104530 cni.go:136] 2 nodes found, recommending kindnet
	I1212 00:37:04.355105  104530 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 00:37:04.356881  104530 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 00:37:04.363840  104530 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 00:37:04.363876  104530 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1212 00:37:04.363888  104530 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 00:37:04.363897  104530 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:37:04.363932  104530 command_runner.go:130] > Access: 2023-12-12 00:36:32.475977075 +0000
	I1212 00:37:04.363942  104530 command_runner.go:130] > Modify: 2023-12-08 06:25:18.000000000 +0000
	I1212 00:37:04.363949  104530 command_runner.go:130] > Change: 2023-12-12 00:36:30.674977075 +0000
	I1212 00:37:04.363955  104530 command_runner.go:130] >  Birth: -
	I1212 00:37:04.364014  104530 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 00:37:04.364031  104530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 00:37:04.384536  104530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 00:37:05.836837  104530 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1212 00:37:05.848426  104530 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1212 00:37:05.852488  104530 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1212 00:37:05.879402  104530 command_runner.go:130] > daemonset.apps/kindnet configured
	I1212 00:37:05.888362  104530 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.503791012s)
	I1212 00:37:05.888392  104530 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:37:05.888502  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
	I1212 00:37:05.888513  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:05.888524  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:05.888534  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:05.893619  104530 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 00:37:05.893657  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:05.893666  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:05.893674  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:05.893682  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:05.893690  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:05.893699  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:05 GMT
	I1212 00:37:05.893708  104530 round_trippers.go:580]     Audit-Id: 0f783734-4de0-49f4-945d-a630ecccf305
	I1212 00:37:05.895980  104530 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1199"},"items":[{"metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84957 chars]
	I1212 00:37:05.900061  104530 system_pods.go:59] 12 kube-system pods found
	I1212 00:37:05.900092  104530 system_pods.go:61] "coredns-5dd5756b68-t9jz8" [3605a003-e8d6-46b2-8fe7-f45647656622] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:37:05.900101  104530 system_pods.go:61] "etcd-multinode-859606" [7d6ae370-b910-4aef-8729-e141b307ae17] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 00:37:05.900106  104530 system_pods.go:61] "kindnet-9slwc" [6b37daf7-e9d5-47c5-ae94-01150282b6cf] Running
	I1212 00:37:05.900109  104530 system_pods.go:61] "kindnet-d4q52" [35ed1c56-7487-4b6d-ab1f-b5cfe6502739] Running
	I1212 00:37:05.900116  104530 system_pods.go:61] "kindnet-x2g5d" [c1dab004-2557-4b4f-975b-bd0b5a8f4d90] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 00:37:05.900123  104530 system_pods.go:61] "kube-apiserver-multinode-859606" [0060efa7-dc06-439e-878f-b93b0e016326] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 00:37:05.900135  104530 system_pods.go:61] "kube-controller-manager-multinode-859606" [901bf3ab-f34d-42c8-b1da-d5431ae0219f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 00:37:05.900155  104530 system_pods.go:61] "kube-proxy-6f6zz" [d5931621-47fd-4f1a-bf46-813dd8352f00] Running
	I1212 00:37:05.900164  104530 system_pods.go:61] "kube-proxy-prf7f" [8238226c-3d01-4b91-963b-7360206b8615] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 00:37:05.900171  104530 system_pods.go:61] "kube-proxy-q9h26" [7dd12033-bf81-4cd3-a412-3fe3211dc87b] Running
	I1212 00:37:05.900176  104530 system_pods.go:61] "kube-scheduler-multinode-859606" [19a4264c-6ba5-44f4-8419-6f04d6224c92] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 00:37:05.900188  104530 system_pods.go:61] "storage-provisioner" [a021db21-b335-4c05-8e32-808642dbb72e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:37:05.900194  104530 system_pods.go:74] duration metric: took 11.796772ms to wait for pod list to return data ...
	I1212 00:37:05.900203  104530 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:37:05.900268  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes
	I1212 00:37:05.900277  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:05.900284  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:05.900293  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:05.902944  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:05.902977  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:05.902987  104530 round_trippers.go:580]     Audit-Id: 81b09a2b-85f5-497e-b79a-4f9569b9a2e7
	I1212 00:37:05.903000  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:05.903011  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:05.903018  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:05.903031  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:05.903044  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:05 GMT
	I1212 00:37:05.903213  104530 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1199"},"items":[{"metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10135 chars]
	I1212 00:37:05.903891  104530 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 00:37:05.903937  104530 node_conditions.go:123] node cpu capacity is 2
	I1212 00:37:05.903961  104530 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 00:37:05.903967  104530 node_conditions.go:123] node cpu capacity is 2
	I1212 00:37:05.903974  104530 node_conditions.go:105] duration metric: took 3.766372ms to run NodePressure ...
	I1212 00:37:05.903993  104530 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:37:06.226936  104530 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1212 00:37:06.226983  104530 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1212 00:37:06.227046  104530 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 00:37:06.227181  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I1212 00:37:06.227195  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:06.227207  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:06.227216  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:06.231116  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:06.231139  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:06.231148  104530 round_trippers.go:580]     Audit-Id: 69442a0f-0400-4b49-b627-328626316be1
	I1212 00:37:06.231157  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:06.231166  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:06.231175  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:06.231194  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:06.231203  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:06 GMT
	I1212 00:37:06.231655  104530 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1202"},"items":[{"metadata":{"name":"etcd-multinode-859606","namespace":"kube-system","uid":"7d6ae370-b910-4aef-8729-e141b307ae17","resourceVersion":"1175","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.40:2379","kubernetes.io/config.hash":"3caa97c2c89fd490e8012711c8c24bd3","kubernetes.io/config.mirror":"3caa97c2c89fd490e8012711c8c24bd3","kubernetes.io/config.seen":"2023-12-12T00:30:03.645880014Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 29766 chars]
	I1212 00:37:06.233034  104530 kubeadm.go:787] kubelet initialised
	I1212 00:37:06.233057  104530 kubeadm.go:788] duration metric: took 5.989168ms waiting for restarted kubelet to initialise ...
	I1212 00:37:06.233070  104530 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:37:06.233145  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
	I1212 00:37:06.233158  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:06.233168  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:06.233176  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:06.237466  104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:37:06.237487  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:06.237497  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:06 GMT
	I1212 00:37:06.237506  104530 round_trippers.go:580]     Audit-Id: 39c8852d-e60c-4370-870d-ec951e0b6883
	I1212 00:37:06.237515  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:06.237528  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:06.237540  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:06.237548  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:06.238857  104530 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1202"},"items":[{"metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84957 chars]
	I1212 00:37:06.242660  104530 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:06.242743  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
	I1212 00:37:06.242753  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:06.242767  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:06.242780  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:06.245902  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:06.245916  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:06.245922  104530 round_trippers.go:580]     Audit-Id: 992a9c9e-aaec-49ae-b76c-09a84a7382e6
	I1212 00:37:06.245937  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:06.245952  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:06.245967  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:06.245974  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:06.245983  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:06 GMT
	I1212 00:37:06.246223  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1212 00:37:06.246613  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:06.246627  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:06.246633  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:06.246640  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:06.248752  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:06.248771  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:06.248780  104530 round_trippers.go:580]     Audit-Id: e035e5e3-4a98-439c-b13b-fca81955f3e3
	I1212 00:37:06.248788  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:06.248796  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:06.248805  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:06.248820  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:06.248828  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:06 GMT
	I1212 00:37:06.249002  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
	I1212 00:37:06.249315  104530 pod_ready.go:97] node "multinode-859606" hosting pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
	I1212 00:37:06.249335  104530 pod_ready.go:81] duration metric: took 6.646085ms waiting for pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace to be "Ready" ...
	E1212 00:37:06.249343  104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606" hosting pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
	I1212 00:37:06.249367  104530 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-859606" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:06.249423  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-859606
	I1212 00:37:06.249431  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:06.249441  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:06.249459  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:06.251411  104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:37:06.251431  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:06.251445  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:06.251453  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:06 GMT
	I1212 00:37:06.251462  104530 round_trippers.go:580]     Audit-Id: 78646abe-5066-4ba6-8d95-ec6fa44a1ab7
	I1212 00:37:06.251469  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:06.251476  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:06.251486  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:06.251707  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-859606","namespace":"kube-system","uid":"7d6ae370-b910-4aef-8729-e141b307ae17","resourceVersion":"1175","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.40:2379","kubernetes.io/config.hash":"3caa97c2c89fd490e8012711c8c24bd3","kubernetes.io/config.mirror":"3caa97c2c89fd490e8012711c8c24bd3","kubernetes.io/config.seen":"2023-12-12T00:30:03.645880014Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6296 chars]
	I1212 00:37:06.252098  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:06.252112  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:06.252121  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:06.252127  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:06.254083  104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:37:06.254103  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:06.254111  104530 round_trippers.go:580]     Audit-Id: 55b0d2ca-975d-4309-84a7-7cb9b1d8e361
	I1212 00:37:06.254120  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:06.254128  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:06.254136  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:06.254144  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:06.254152  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:06 GMT
	I1212 00:37:06.254323  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
	I1212 00:37:06.254602  104530 pod_ready.go:97] node "multinode-859606" hosting pod "etcd-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
	I1212 00:37:06.254619  104530 pod_ready.go:81] duration metric: took 5.239063ms waiting for pod "etcd-multinode-859606" in "kube-system" namespace to be "Ready" ...
	E1212 00:37:06.254626  104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606" hosting pod "etcd-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
	I1212 00:37:06.254639  104530 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-859606" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:06.254698  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-859606
	I1212 00:37:06.254708  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:06.254715  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:06.254727  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:06.256930  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:06.256949  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:06.256958  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:06.256967  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:06.256974  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:06.256983  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:06.256991  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:06 GMT
	I1212 00:37:06.257005  104530 round_trippers.go:580]     Audit-Id: aa63f562-c9c3-453f-92e9-d6a4c4b3232f
	I1212 00:37:06.257170  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-859606","namespace":"kube-system","uid":"0060efa7-dc06-439e-878f-b93b0e016326","resourceVersion":"1177","creationTimestamp":"2023-12-12T00:30:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.40:8443","kubernetes.io/config.hash":"6579d881f0553848179768317ac84853","kubernetes.io/config.mirror":"6579d881f0553848179768317ac84853","kubernetes.io/config.seen":"2023-12-12T00:29:55.207817853Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
	I1212 00:37:06.257538  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:06.257552  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:06.257558  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:06.257564  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:06.259425  104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:37:06.259445  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:06.259455  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:06.259463  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:06 GMT
	I1212 00:37:06.259471  104530 round_trippers.go:580]     Audit-Id: 6b47a0d5-4136-488c-882b-b7fdd50344ce
	I1212 00:37:06.259479  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:06.259487  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:06.259495  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:06.259782  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
	I1212 00:37:06.260081  104530 pod_ready.go:97] node "multinode-859606" hosting pod "kube-apiserver-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
	I1212 00:37:06.260097  104530 pod_ready.go:81] duration metric: took 5.449955ms waiting for pod "kube-apiserver-multinode-859606" in "kube-system" namespace to be "Ready" ...
	E1212 00:37:06.260103  104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606" hosting pod "kube-apiserver-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
	I1212 00:37:06.260113  104530 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-859606" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:06.260178  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:06.260188  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:06.260196  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:06.260209  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:06.262963  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:06.262979  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:06.262988  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:06.262996  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:06.263012  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:06 GMT
	I1212 00:37:06.263024  104530 round_trippers.go:580]     Audit-Id: eb54b9e3-39c5-4e0b-975b-d574f9443f33
	I1212 00:37:06.263034  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:06.263051  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:06.263697  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I1212 00:37:06.289336  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:06.289371  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:06.289380  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:06.289385  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:06.292233  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:06.292251  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:06.292257  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:06.292263  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:06 GMT
	I1212 00:37:06.292268  104530 round_trippers.go:580]     Audit-Id: 436076e3-8b39-45e2-80a6-f8f174ee0ea6
	I1212 00:37:06.292273  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:06.292280  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:06.292288  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:06.292641  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
	I1212 00:37:06.293036  104530 pod_ready.go:97] node "multinode-859606" hosting pod "kube-controller-manager-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
	I1212 00:37:06.293058  104530 pod_ready.go:81] duration metric: took 32.933264ms waiting for pod "kube-controller-manager-multinode-859606" in "kube-system" namespace to be "Ready" ...
	E1212 00:37:06.293071  104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606" hosting pod "kube-controller-manager-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
	I1212 00:37:06.293082  104530 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6f6zz" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:06.489501  104530 request.go:629] Waited for 196.342403ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6f6zz
	I1212 00:37:06.489581  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6f6zz
	I1212 00:37:06.489586  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:06.489598  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:06.489608  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:06.493034  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:06.493071  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:06.493081  104530 round_trippers.go:580]     Audit-Id: 0957bc6a-2f51-41b9-a929-11d0c801edd6
	I1212 00:37:06.493089  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:06.493098  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:06.493113  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:06.493126  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:06.493134  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:06 GMT
	I1212 00:37:06.493829  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6f6zz","generateName":"kube-proxy-","namespace":"kube-system","uid":"d5931621-47fd-4f1a-bf46-813dd8352f00","resourceVersion":"1087","creationTimestamp":"2023-12-12T00:32:02Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"92d6756f-ef04-4d8e-970a-e73854372aee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:32:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d6756f-ef04-4d8e-970a-e73854372aee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I1212 00:37:06.688623  104530 request.go:629] Waited for 194.307311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m03
	I1212 00:37:06.688686  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m03
	I1212 00:37:06.688690  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:06.688698  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:06.688704  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:06.691344  104530 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1212 00:37:06.691361  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:06.691368  104530 round_trippers.go:580]     Audit-Id: 5d88fdfd-6f2f-44b1-a736-b6120a7e5a78
	I1212 00:37:06.691373  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:06.691390  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:06.691397  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:06.691405  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:06.691413  104530 round_trippers.go:580]     Content-Length: 210
	I1212 00:37:06.691425  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:06 GMT
	I1212 00:37:06.691448  104530 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-859606-m03\" not found","reason":"NotFound","details":{"name":"multinode-859606-m03","kind":"nodes"},"code":404}
	I1212 00:37:06.691655  104530 pod_ready.go:97] node "multinode-859606-m03" hosting pod "kube-proxy-6f6zz" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-859606-m03": nodes "multinode-859606-m03" not found
	I1212 00:37:06.691677  104530 pod_ready.go:81] duration metric: took 398.587524ms waiting for pod "kube-proxy-6f6zz" in "kube-system" namespace to be "Ready" ...
	E1212 00:37:06.691686  104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606-m03" hosting pod "kube-proxy-6f6zz" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-859606-m03": nodes "multinode-859606-m03" not found
	I1212 00:37:06.691693  104530 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-prf7f" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:06.889174  104530 request.go:629] Waited for 197.369164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-prf7f
	I1212 00:37:06.889252  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-prf7f
	I1212 00:37:06.889259  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:06.889271  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:06.889280  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:06.893029  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:06.893047  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:06.893054  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:06 GMT
	I1212 00:37:06.893093  104530 round_trippers.go:580]     Audit-Id: 6846aa1b-42ae-4d5d-a1c7-384d5728840b
	I1212 00:37:06.893108  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:06.893115  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:06.893120  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:06.893128  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:06.893282  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-prf7f","generateName":"kube-proxy-","namespace":"kube-system","uid":"8238226c-3d01-4b91-963b-7360206b8615","resourceVersion":"1182","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"92d6756f-ef04-4d8e-970a-e73854372aee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d6756f-ef04-4d8e-970a-e73854372aee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5929 chars]
	I1212 00:37:07.089197  104530 request.go:629] Waited for 195.360283ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:07.089292  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:07.089298  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:07.089316  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:07.089322  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:07.091891  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:07.091927  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:07.091939  104530 round_trippers.go:580]     Audit-Id: 1d65f568-2c4a-42d4-bbba-8be4bdc48dd6
	I1212 00:37:07.091948  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:07.091961  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:07.091970  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:07.091979  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:07.091990  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:07 GMT
	I1212 00:37:07.092224  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
	I1212 00:37:07.092619  104530 pod_ready.go:97] node "multinode-859606" hosting pod "kube-proxy-prf7f" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
	I1212 00:37:07.092640  104530 pod_ready.go:81] duration metric: took 400.940457ms waiting for pod "kube-proxy-prf7f" in "kube-system" namespace to be "Ready" ...
	E1212 00:37:07.092649  104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606" hosting pod "kube-proxy-prf7f" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
	I1212 00:37:07.092655  104530 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q9h26" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:07.289085  104530 request.go:629] Waited for 196.361677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q9h26
	I1212 00:37:07.289150  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q9h26
	I1212 00:37:07.289155  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:07.289165  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:07.289173  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:07.292103  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:07.292128  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:07.292139  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:07.292147  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:07 GMT
	I1212 00:37:07.292160  104530 round_trippers.go:580]     Audit-Id: 4abc3eb7-8c82-4d87-b6ea-4f96f5e08936
	I1212 00:37:07.292172  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:07.292182  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:07.292187  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:07.292410  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q9h26","generateName":"kube-proxy-","namespace":"kube-system","uid":"7dd12033-bf81-4cd3-a412-3fe3211dc87b","resourceVersion":"978","creationTimestamp":"2023-12-12T00:31:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"92d6756f-ef04-4d8e-970a-e73854372aee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:31:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d6756f-ef04-4d8e-970a-e73854372aee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I1212 00:37:07.489267  104530 request.go:629] Waited for 196.338554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m02
	I1212 00:37:07.489349  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m02
	I1212 00:37:07.489362  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:07.489373  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:07.489380  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:07.491859  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:07.491887  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:07.491897  104530 round_trippers.go:580]     Audit-Id: a3f5d27d-a101-460d-9f23-04a20e185c6f
	I1212 00:37:07.491907  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:07.491930  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:07.491943  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:07.491952  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:07.491959  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:07 GMT
	I1212 00:37:07.492124  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606-m02","uid":"4dead465-c032-4274-8147-a5a7d38c1bf5","resourceVersion":"1083","creationTimestamp":"2023-12-12T00:34:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_35_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:34:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3805 chars]
	I1212 00:37:07.492453  104530 pod_ready.go:92] pod "kube-proxy-q9h26" in "kube-system" namespace has status "Ready":"True"
	I1212 00:37:07.492469  104530 pod_ready.go:81] duration metric: took 399.80822ms waiting for pod "kube-proxy-q9h26" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:07.492483  104530 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-859606" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:07.688932  104530 request.go:629] Waited for 196.377404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-859606
	I1212 00:37:07.689024  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-859606
	I1212 00:37:07.689047  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:07.689062  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:07.689086  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:07.692055  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:07.692076  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:07.692083  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:07.692088  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:07.692094  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:07.692101  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:07.692109  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:07 GMT
	I1212 00:37:07.692118  104530 round_trippers.go:580]     Audit-Id: 8c31c43b-819b-4283-9d9f-35f04a7e36e9
	I1212 00:37:07.692273  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-859606","namespace":"kube-system","uid":"19a4264c-6ba5-44f4-8419-6f04d6224c92","resourceVersion":"1173","creationTimestamp":"2023-12-12T00:30:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bf1fb8b18f1a6f1d2413ac0c0fd0e39c","kubernetes.io/config.mirror":"bf1fb8b18f1a6f1d2413ac0c0fd0e39c","kubernetes.io/config.seen":"2023-12-12T00:29:55.207819594Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I1212 00:37:07.889054  104530 request.go:629] Waited for 196.353748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:07.889117  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:07.889125  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:07.889137  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:07.889151  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:07.892167  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:07.892188  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:07.892194  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:07.892200  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:07.892226  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:07.892241  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:07.892250  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:07 GMT
	I1212 00:37:07.892257  104530 round_trippers.go:580]     Audit-Id: 9ee0618c-b043-4e2b-9e76-9d15b5ac7dc7
	I1212 00:37:07.892403  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
	I1212 00:37:07.892746  104530 pod_ready.go:97] node "multinode-859606" hosting pod "kube-scheduler-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
	I1212 00:37:07.892773  104530 pod_ready.go:81] duration metric: took 400.280036ms waiting for pod "kube-scheduler-multinode-859606" in "kube-system" namespace to be "Ready" ...
	E1212 00:37:07.892785  104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606" hosting pod "kube-scheduler-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
	I1212 00:37:07.892824  104530 pod_ready.go:38] duration metric: took 1.659742815s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:37:07.892857  104530 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 00:37:07.904430  104530 command_runner.go:130] > -16
	I1212 00:37:07.904886  104530 ops.go:34] apiserver oom_adj: -16
	I1212 00:37:07.904899  104530 kubeadm.go:640] restartCluster took 22.709280238s
	I1212 00:37:07.904906  104530 kubeadm.go:406] StartCluster complete in 22.738318179s
	I1212 00:37:07.904921  104530 settings.go:142] acquiring lock: {Name:mk78e6f78084358f8434def169cefe6a62407a1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:37:07.904985  104530 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17764-80294/kubeconfig
	I1212 00:37:07.905654  104530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-80294/kubeconfig: {Name:mkf7cdfdedbee22114abcb4b16af22e84438f3f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:37:07.905860  104530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 00:37:07.906001  104530 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 00:37:07.909257  104530 out.go:177] * Enabled addons: 
	I1212 00:37:07.906240  104530 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17764-80294/kubeconfig
	I1212 00:37:07.906246  104530 config.go:182] Loaded profile config "multinode-859606": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 00:37:07.910860  104530 addons.go:502] enable addons completed in 4.865147ms: enabled=[]
	I1212 00:37:07.911128  104530 kapi.go:59] client config for multinode-859606: &rest.Config{Host:"https://192.168.39.40:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/client.crt", KeyFile:"/home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/client.key", CAFile:"/home/jenkins/minikube-integration/17764-80294/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:37:07.911447  104530 round_trippers.go:463] GET https://192.168.39.40:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 00:37:07.911463  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:07.911471  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:07.911477  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:07.914264  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:07.914281  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:07.914291  104530 round_trippers.go:580]     Audit-Id: 48f5a121-1933-4a22-a355-5496f01879d3
	I1212 00:37:07.914299  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:07.914306  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:07.914317  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:07.914324  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:07.914335  104530 round_trippers.go:580]     Content-Length: 292
	I1212 00:37:07.914346  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:07 GMT
	I1212 00:37:07.914379  104530 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"75766566-fdf3-4c8a-abaa-ce458e02b129","resourceVersion":"1201","creationTimestamp":"2023-12-12T00:30:03Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1212 00:37:07.914516  104530 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-859606" context rescaled to 1 replicas
	I1212 00:37:07.914548  104530 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 00:37:07.917208  104530 out.go:177] * Verifying Kubernetes components...
	I1212 00:37:07.918721  104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:37:08.110540  104530 command_runner.go:130] > apiVersion: v1
	I1212 00:37:08.110578  104530 command_runner.go:130] > data:
	I1212 00:37:08.110585  104530 command_runner.go:130] >   Corefile: |
	I1212 00:37:08.110591  104530 command_runner.go:130] >     .:53 {
	I1212 00:37:08.110596  104530 command_runner.go:130] >         log
	I1212 00:37:08.110602  104530 command_runner.go:130] >         errors
	I1212 00:37:08.110608  104530 command_runner.go:130] >         health {
	I1212 00:37:08.110614  104530 command_runner.go:130] >            lameduck 5s
	I1212 00:37:08.110620  104530 command_runner.go:130] >         }
	I1212 00:37:08.110627  104530 command_runner.go:130] >         ready
	I1212 00:37:08.110636  104530 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1212 00:37:08.110647  104530 command_runner.go:130] >            pods insecure
	I1212 00:37:08.110655  104530 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1212 00:37:08.110667  104530 command_runner.go:130] >            ttl 30
	I1212 00:37:08.110673  104530 command_runner.go:130] >         }
	I1212 00:37:08.110683  104530 command_runner.go:130] >         prometheus :9153
	I1212 00:37:08.110693  104530 command_runner.go:130] >         hosts {
	I1212 00:37:08.110705  104530 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I1212 00:37:08.110714  104530 command_runner.go:130] >            fallthrough
	I1212 00:37:08.110724  104530 command_runner.go:130] >         }
	I1212 00:37:08.110732  104530 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1212 00:37:08.110737  104530 command_runner.go:130] >            max_concurrent 1000
	I1212 00:37:08.110743  104530 command_runner.go:130] >         }
	I1212 00:37:08.110748  104530 command_runner.go:130] >         cache 30
	I1212 00:37:08.110755  104530 command_runner.go:130] >         loop
	I1212 00:37:08.110761  104530 command_runner.go:130] >         reload
	I1212 00:37:08.110765  104530 command_runner.go:130] >         loadbalance
	I1212 00:37:08.110771  104530 command_runner.go:130] >     }
	I1212 00:37:08.110776  104530 command_runner.go:130] > kind: ConfigMap
	I1212 00:37:08.110782  104530 command_runner.go:130] > metadata:
	I1212 00:37:08.110787  104530 command_runner.go:130] >   creationTimestamp: "2023-12-12T00:30:03Z"
	I1212 00:37:08.110793  104530 command_runner.go:130] >   name: coredns
	I1212 00:37:08.110797  104530 command_runner.go:130] >   namespace: kube-system
	I1212 00:37:08.110804  104530 command_runner.go:130] >   resourceVersion: "407"
	I1212 00:37:08.110808  104530 command_runner.go:130] >   uid: 58df000b-e223-4f9f-a0ce-e6a345bc8b1e
	I1212 00:37:08.110871  104530 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 00:37:08.110910  104530 node_ready.go:35] waiting up to 6m0s for node "multinode-859606" to be "Ready" ...
	I1212 00:37:08.111108  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:08.111132  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:08.111144  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:08.111155  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:08.115592  104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:37:08.115608  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:08.115615  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:08.115620  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:08 GMT
	I1212 00:37:08.115625  104530 round_trippers.go:580]     Audit-Id: 78e22458-8a23-48e3-9e27-578febb59a20
	I1212 00:37:08.115630  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:08.115635  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:08.115640  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:08.116255  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
	I1212 00:37:08.289077  104530 request.go:629] Waited for 172.38964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:08.289150  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:08.289155  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:08.289163  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:08.289178  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:08.291767  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:08.291787  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:08.291797  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:08.291806  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:08 GMT
	I1212 00:37:08.291817  104530 round_trippers.go:580]     Audit-Id: bd808d02-17db-44e3-ae16-8f55b7323fe8
	I1212 00:37:08.291829  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:08.291841  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:08.291852  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:08.292123  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
	I1212 00:37:08.793301  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:08.793331  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:08.793340  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:08.793346  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:08.796482  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:08.796514  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:08.796525  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:08.796533  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:08.796539  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:08.796544  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:08 GMT
	I1212 00:37:08.796549  104530 round_trippers.go:580]     Audit-Id: f551640f-6397-4f2f-ad7b-75e7a1ad4ab4
	I1212 00:37:08.796554  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:08.796722  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
	I1212 00:37:09.293409  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:09.293442  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:09.293453  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:09.293461  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:09.296451  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:09.296469  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:09.296477  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:09.296482  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:09.296487  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:09.296496  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:09.296519  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:09 GMT
	I1212 00:37:09.296527  104530 round_trippers.go:580]     Audit-Id: 2a8eef1a-1ec0-43cd-aba1-3dcd1603fa87
	I1212 00:37:09.296803  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
	I1212 00:37:09.793597  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:09.793626  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:09.793645  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:09.793664  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:09.796604  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:09.796624  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:09.796631  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:09.796636  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:09.796644  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:09.796649  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:09 GMT
	I1212 00:37:09.796654  104530 round_trippers.go:580]     Audit-Id: 022e877a-18b3-43f9-ab6d-dff649dfc9f8
	I1212 00:37:09.796659  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:09.796949  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:09.797279  104530 node_ready.go:49] node "multinode-859606" has status "Ready":"True"
	I1212 00:37:09.797303  104530 node_ready.go:38] duration metric: took 1.686360286s waiting for node "multinode-859606" to be "Ready" ...
	I1212 00:37:09.797315  104530 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:37:09.797375  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
	I1212 00:37:09.797386  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:09.797396  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:09.797406  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:09.801844  104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:37:09.801867  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:09.801876  104530 round_trippers.go:580]     Audit-Id: 420ea970-9f48-457c-b0f7-7ec9ec1a588e
	I1212 00:37:09.801885  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:09.801894  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:09.801904  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:09.801927  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:09.801938  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:09 GMT
	I1212 00:37:09.803506  104530 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1216"},"items":[{"metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83879 chars]
	I1212 00:37:09.806061  104530 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:09.806150  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
	I1212 00:37:09.806162  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:09.806174  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:09.806184  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:09.808345  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:09.808361  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:09.808374  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:09.808383  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:09.808397  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:09 GMT
	I1212 00:37:09.808405  104530 round_trippers.go:580]     Audit-Id: 9a9463c1-b358-492e-b922-367c6104207c
	I1212 00:37:09.808413  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:09.808422  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:09.808706  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1212 00:37:09.809215  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:09.809231  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:09.809238  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:09.809244  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:09.811292  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:09.811307  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:09.811316  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:09 GMT
	I1212 00:37:09.811323  104530 round_trippers.go:580]     Audit-Id: f5ebccd1-dc5e-4d64-b27a-f59d7a10b2c3
	I1212 00:37:09.811331  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:09.811346  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:09.811359  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:09.811367  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:09.811572  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:09.812037  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
	I1212 00:37:09.812052  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:09.812059  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:09.812065  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:09.813996  104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:37:09.814010  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:09.814019  104530 round_trippers.go:580]     Audit-Id: e587521b-4190-4251-9713-9fe4cfdc8df1
	I1212 00:37:09.814027  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:09.814034  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:09.814043  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:09.814054  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:09.814063  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:09 GMT
	I1212 00:37:09.814382  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1212 00:37:09.889078  104530 request.go:629] Waited for 74.284522ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:09.889133  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:09.889139  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:09.889148  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:09.889154  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:09.892171  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:09.892194  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:09.892203  104530 round_trippers.go:580]     Audit-Id: 6c0b5759-dcf0-429c-88bf-c342959f386c
	I1212 00:37:09.892229  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:09.892241  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:09.892250  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:09.892269  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:09.892283  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:09 GMT
	I1212 00:37:09.892510  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:10.393716  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
	I1212 00:37:10.393745  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:10.393755  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:10.393763  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:10.396859  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:10.396889  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:10.396899  104530 round_trippers.go:580]     Audit-Id: 5e8103b3-ec4e-4213-995d-24c751476571
	I1212 00:37:10.396907  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:10.396915  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:10.396923  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:10.396931  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:10.396939  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:10 GMT
	I1212 00:37:10.397178  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1212 00:37:10.397682  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:10.397698  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:10.397713  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:10.397722  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:10.399962  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:10.399981  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:10.399991  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:10 GMT
	I1212 00:37:10.399999  104530 round_trippers.go:580]     Audit-Id: 63def391-cbb3-428c-8bda-86f13b98f5c0
	I1212 00:37:10.400014  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:10.400026  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:10.400035  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:10.400046  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:10.400207  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:10.894000  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
	I1212 00:37:10.894037  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:10.894048  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:10.894057  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:10.899308  104530 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 00:37:10.899334  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:10.899344  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:10.899355  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:10.899362  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:10.899369  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:10 GMT
	I1212 00:37:10.899377  104530 round_trippers.go:580]     Audit-Id: a6f54ff0-c318-428c-9e20-5afa1d44815f
	I1212 00:37:10.899383  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:10.899671  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1212 00:37:10.900196  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:10.900212  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:10.900219  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:10.900225  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:10.902531  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:10.902550  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:10.902560  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:10.902568  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:10.902576  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:10 GMT
	I1212 00:37:10.902586  104530 round_trippers.go:580]     Audit-Id: 72a3507b-3092-4d9e-bfa5-e84c0a5f5811
	I1212 00:37:10.902599  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:10.902610  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:10.902856  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:11.393521  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
	I1212 00:37:11.393559  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:11.393569  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:11.393583  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:11.397962  104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:37:11.398001  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:11.398012  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:11.398020  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:11.398028  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:11.398036  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:11 GMT
	I1212 00:37:11.398048  104530 round_trippers.go:580]     Audit-Id: 36163564-e6ac-4456-b495-9930bf8c7c95
	I1212 00:37:11.398056  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:11.399514  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1212 00:37:11.400077  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:11.400105  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:11.400115  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:11.400129  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:11.402841  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:11.402874  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:11.402895  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:11 GMT
	I1212 00:37:11.402903  104530 round_trippers.go:580]     Audit-Id: cf888e1f-3585-4d4c-b47a-d65c1b673f60
	I1212 00:37:11.402913  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:11.402923  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:11.402936  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:11.402944  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:11.403152  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:11.893890  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
	I1212 00:37:11.893921  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:11.893930  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:11.893936  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:11.896885  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:11.896910  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:11.896920  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:11.896927  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:11.896934  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:11.896942  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:11.896949  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:11 GMT
	I1212 00:37:11.896956  104530 round_trippers.go:580]     Audit-Id: 560ccbf4-a93e-418b-97ef-b02d5b4a7c2a
	I1212 00:37:11.897291  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1212 00:37:11.897761  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:11.897778  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:11.897785  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:11.897791  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:11.900338  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:11.900381  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:11.900391  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:11 GMT
	I1212 00:37:11.900400  104530 round_trippers.go:580]     Audit-Id: 57fad163-7798-4518-b48a-afffca40ee66
	I1212 00:37:11.900408  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:11.900416  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:11.900428  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:11.900438  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:11.900617  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:11.900907  104530 pod_ready.go:102] pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace has status "Ready":"False"
	I1212 00:37:12.393289  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
	I1212 00:37:12.393323  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:12.393337  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:12.393346  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:12.397658  104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:37:12.397679  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:12.397686  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:12.397691  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:12 GMT
	I1212 00:37:12.397697  104530 round_trippers.go:580]     Audit-Id: 97d200a8-1144-4cfb-b7e7-ae622c67a09e
	I1212 00:37:12.397702  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:12.397707  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:12.397712  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:12.398001  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1212 00:37:12.398453  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:12.398468  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:12.398475  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:12.398480  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:12.401097  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:12.401115  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:12.401122  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:12 GMT
	I1212 00:37:12.401127  104530 round_trippers.go:580]     Audit-Id: 2a27c4e6-1e77-48fe-b9ff-18537a1ba771
	I1212 00:37:12.401135  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:12.401145  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:12.401153  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:12.401168  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:12.401283  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:12.893943  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
	I1212 00:37:12.893969  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:12.893977  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:12.893984  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:12.897025  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:12.897047  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:12.897057  104530 round_trippers.go:580]     Audit-Id: 551ec886-a3c8-4be6-946b-459f81574f91
	I1212 00:37:12.897064  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:12.897071  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:12.897082  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:12.897091  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:12.897103  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:12 GMT
	I1212 00:37:12.897283  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1212 00:37:12.898253  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:12.898328  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:12.898343  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:12.898352  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:12.902125  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:12.902151  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:12.902161  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:12 GMT
	I1212 00:37:12.902171  104530 round_trippers.go:580]     Audit-Id: bb98bd7a-c04d-437d-aef6-72f5de2e6aac
	I1212 00:37:12.902182  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:12.902196  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:12.902214  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:12.902227  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:12.902594  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:13.393264  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
	I1212 00:37:13.393294  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:13.393307  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:13.393317  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:13.396512  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:13.396534  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:13.396541  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:13.396546  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:13 GMT
	I1212 00:37:13.396552  104530 round_trippers.go:580]     Audit-Id: 7f6212d1-aaf4-45df-a3b0-bb989bb1227a
	I1212 00:37:13.396560  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:13.396569  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:13.396578  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:13.396776  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1212 00:37:13.397248  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:13.397262  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:13.397270  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:13.397275  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:13.399404  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:13.399423  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:13.399433  104530 round_trippers.go:580]     Audit-Id: 77e44ea3-4125-4d4b-9450-f85475c1539a
	I1212 00:37:13.399440  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:13.399447  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:13.399454  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:13.399464  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:13.399471  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:13 GMT
	I1212 00:37:13.399656  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:13.893292  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
	I1212 00:37:13.893317  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:13.893325  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:13.893331  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:13.896458  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:13.896475  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:13.896487  104530 round_trippers.go:580]     Audit-Id: ac46caca-dc3e-4d98-bda6-e430bb1fa8ae
	I1212 00:37:13.896494  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:13.896512  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:13.896519  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:13.896526  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:13.896534  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:13 GMT
	I1212 00:37:13.897107  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1231","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
	I1212 00:37:13.897587  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:13.897603  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:13.897613  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:13.897621  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:13.900547  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:13.900568  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:13.900578  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:13.900586  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:13 GMT
	I1212 00:37:13.900595  104530 round_trippers.go:580]     Audit-Id: e3dbde9a-cc4a-4762-867f-d9e9a410aef1
	I1212 00:37:13.900603  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:13.900611  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:13.900643  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:13.900901  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:13.901209  104530 pod_ready.go:92] pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace has status "Ready":"True"
	I1212 00:37:13.901226  104530 pod_ready.go:81] duration metric: took 4.09514334s waiting for pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace to be "Ready" ...
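The repeated GETs above are the readiness gate polling the CoreDNS pod roughly every 500ms until its Ready condition flips to True (here after about 4.1s). A minimal client-go sketch of that kind of check follows; it is not minikube's actual pod_ready implementation, and the kubeconfig path, namespace, and pod name are illustrative only.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Illustrative kubeconfig path; replace with a real one.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll about every 500ms (matching the cadence visible in the log)
	// until the pod reports Ready or the 6-minute budget is exhausted.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-t9jz8", metav1.GetOptions{})
		if err != nil {
			return false, nil // retry on transient errors
		}
		return podIsReady(pod), nil
	})
	fmt.Println("wait result:", err)
}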
	I1212 00:37:13.901265  104530 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-859606" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:13.901326  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-859606
	I1212 00:37:13.901336  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:13.901346  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:13.901356  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:13.903529  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:13.903549  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:13.903558  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:13.903566  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:13.903574  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:13.903582  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:13 GMT
	I1212 00:37:13.903590  104530 round_trippers.go:580]     Audit-Id: d34bc26a-3f02-4be9-9af2-1ad0fadfbfa3
	I1212 00:37:13.903596  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:13.903967  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-859606","namespace":"kube-system","uid":"7d6ae370-b910-4aef-8729-e141b307ae17","resourceVersion":"1218","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.40:2379","kubernetes.io/config.hash":"3caa97c2c89fd490e8012711c8c24bd3","kubernetes.io/config.mirror":"3caa97c2c89fd490e8012711c8c24bd3","kubernetes.io/config.seen":"2023-12-12T00:30:03.645880014Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6072 chars]
	I1212 00:37:13.904430  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:13.904447  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:13.904454  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:13.904460  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:13.906383  104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:37:13.906404  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:13.906413  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:13.906420  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:13.906429  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:13.906444  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:13.906453  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:13 GMT
	I1212 00:37:13.906466  104530 round_trippers.go:580]     Audit-Id: 3f37632a-0e9f-4887-b36f-43d17d2e4134
	I1212 00:37:13.906620  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:13.906989  104530 pod_ready.go:92] pod "etcd-multinode-859606" in "kube-system" namespace has status "Ready":"True"
	I1212 00:37:13.907016  104530 pod_ready.go:81] duration metric: took 5.741099ms waiting for pod "etcd-multinode-859606" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:13.907041  104530 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-859606" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:13.907100  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-859606
	I1212 00:37:13.907110  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:13.907118  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:13.907125  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:13.909221  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:13.909237  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:13.909245  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:13.909253  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:13.909260  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:13 GMT
	I1212 00:37:13.909267  104530 round_trippers.go:580]     Audit-Id: 10369159-e62c-4dd4-8d77-2e82a59d784d
	I1212 00:37:13.909275  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:13.909287  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:13.909569  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-859606","namespace":"kube-system","uid":"0060efa7-dc06-439e-878f-b93b0e016326","resourceVersion":"1216","creationTimestamp":"2023-12-12T00:30:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.40:8443","kubernetes.io/config.hash":"6579d881f0553848179768317ac84853","kubernetes.io/config.mirror":"6579d881f0553848179768317ac84853","kubernetes.io/config.seen":"2023-12-12T00:29:55.207817853Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7607 chars]
	I1212 00:37:13.909929  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:13.909943  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:13.909953  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:13.909961  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:13.911781  104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:37:13.911800  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:13.911808  104530 round_trippers.go:580]     Audit-Id: c9f36dd0-0f04-4274-9537-6c203e1b93b8
	I1212 00:37:13.911817  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:13.911825  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:13.911833  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:13.911841  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:13.911848  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:13 GMT
	I1212 00:37:13.912152  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:13.912472  104530 pod_ready.go:92] pod "kube-apiserver-multinode-859606" in "kube-system" namespace has status "Ready":"True"
	I1212 00:37:13.912489  104530 pod_ready.go:81] duration metric: took 5.438494ms waiting for pod "kube-apiserver-multinode-859606" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:13.912497  104530 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-859606" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:14.088914  104530 request.go:629] Waited for 176.352891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:14.089000  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:14.089007  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:14.089021  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:14.089037  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:14.092809  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:14.092835  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:14.092845  104530 round_trippers.go:580]     Audit-Id: 2c2f7c55-459e-4d01-a3f2-96b1b6cb8c8b
	I1212 00:37:14.092853  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:14.092861  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:14.092869  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:14.092876  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:14.092885  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:14 GMT
	I1212 00:37:14.093110  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I1212 00:37:14.288948  104530 request.go:629] Waited for 195.377005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:14.289023  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:14.289032  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:14.289039  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:14.289053  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:14.291661  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:14.291688  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:14.291699  104530 round_trippers.go:580]     Audit-Id: 9a8ff279-becc-4981-a5d3-bab45d355f5b
	I1212 00:37:14.291709  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:14.291716  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:14.291721  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:14.291729  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:14.291734  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:14 GMT
	I1212 00:37:14.291936  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:14.489383  104530 request.go:629] Waited for 197.063929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:14.489461  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:14.489467  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:14.489475  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:14.489481  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:14.492357  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:14.492379  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:14.492386  104530 round_trippers.go:580]     Audit-Id: 12e5b7b5-fd32-4fe6-b1ff-eb7b4430f001
	I1212 00:37:14.492392  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:14.492397  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:14.492402  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:14.492407  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:14.492412  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:14 GMT
	I1212 00:37:14.492593  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I1212 00:37:14.689101  104530 request.go:629] Waited for 196.091909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:14.689191  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:14.689198  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:14.689208  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:14.689218  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:14.691837  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:14.691858  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:14.691865  104530 round_trippers.go:580]     Audit-Id: 46cb3999-d30b-4074-ad3e-89d7533c5936
	I1212 00:37:14.691870  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:14.691875  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:14.691880  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:14.691885  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:14.691891  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:14 GMT
	I1212 00:37:14.692335  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:15.193200  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:15.193224  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:15.193232  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:15.193239  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:15.196981  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:15.197000  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:15.197006  104530 round_trippers.go:580]     Audit-Id: e9469ca3-765f-4b94-bad8-b62081cb2809
	I1212 00:37:15.197012  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:15.197034  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:15.197042  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:15.197049  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:15.197056  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:15 GMT
	I1212 00:37:15.197197  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I1212 00:37:15.197635  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:15.197650  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:15.197657  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:15.197663  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:15.199909  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:15.199943  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:15.199952  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:15 GMT
	I1212 00:37:15.199959  104530 round_trippers.go:580]     Audit-Id: 55872ce3-0e31-4a29-bd8d-2fef53f7f5ad
	I1212 00:37:15.199967  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:15.199975  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:15.199983  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:15.199991  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:15.200167  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:15.693002  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:15.693027  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:15.693035  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:15.693041  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:15.695104  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:15.695127  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:15.695138  104530 round_trippers.go:580]     Audit-Id: e8dafcef-e232-4564-93ec-c99146d453a6
	I1212 00:37:15.695144  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:15.695152  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:15.695161  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:15.695170  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:15.695180  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:15 GMT
	I1212 00:37:15.695539  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I1212 00:37:15.695954  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:15.695966  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:15.695974  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:15.695979  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:15.697613  104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:37:15.697631  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:15.697640  104530 round_trippers.go:580]     Audit-Id: cd894f72-99d1-44a1-ba36-abb33011003a
	I1212 00:37:15.697649  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:15.697656  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:15.697661  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:15.697666  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:15.697671  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:15 GMT
	I1212 00:37:15.697922  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:16.193670  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:16.193698  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:16.193707  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:16.193712  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:16.196864  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:16.196891  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:16.196899  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:16.196904  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:16.196909  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:16 GMT
	I1212 00:37:16.196920  104530 round_trippers.go:580]     Audit-Id: 7e651bce-3845-4b66-8fb2-622327e8d40b
	I1212 00:37:16.196928  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:16.196936  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:16.197330  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I1212 00:37:16.197766  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:16.197783  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:16.197790  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:16.197796  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:16.200198  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:16.200219  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:16.200225  104530 round_trippers.go:580]     Audit-Id: 9972e939-1cb4-4a78-8c0d-11a91b0625a8
	I1212 00:37:16.200230  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:16.200235  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:16.200241  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:16.200249  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:16.200254  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:16 GMT
	I1212 00:37:16.200367  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:16.200638  104530 pod_ready.go:102] pod "kube-controller-manager-multinode-859606" in "kube-system" namespace has status "Ready":"False"
	I1212 00:37:16.693040  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:16.693064  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:16.693073  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:16.693090  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:16.696324  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:16.696344  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:16.696354  104530 round_trippers.go:580]     Audit-Id: cfb4110b-a12c-4dd5-bb27-d5b38a9bdf99
	I1212 00:37:16.696363  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:16.696371  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:16.696380  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:16.696388  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:16.696393  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:16 GMT
	I1212 00:37:16.696757  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I1212 00:37:16.697175  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:16.697186  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:16.697193  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:16.697199  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:16.699444  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:16.699466  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:16.699482  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:16.699489  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:16.699508  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:16.699514  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:16.699519  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:16 GMT
	I1212 00:37:16.699524  104530 round_trippers.go:580]     Audit-Id: 86f1d394-268f-4773-8a4f-65dfa15966b3
	I1212 00:37:16.699786  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:17.193535  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:17.193562  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:17.193571  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:17.193577  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:17.197001  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:17.197029  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:17.197039  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:17.197048  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:17.197056  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:17.197063  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:17 GMT
	I1212 00:37:17.197078  104530 round_trippers.go:580]     Audit-Id: 0039bd07-2809-441c-8a08-a005a1fb9474
	I1212 00:37:17.197086  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:17.197590  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I1212 00:37:17.198195  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:17.198215  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:17.198227  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:17.198235  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:17.200561  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:17.200580  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:17.200594  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:17.200602  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:17.200608  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:17 GMT
	I1212 00:37:17.200615  104530 round_trippers.go:580]     Audit-Id: 7ca59026-3641-45f9-af2d-e56b2f15bbf4
	I1212 00:37:17.200623  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:17.200631  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:17.200818  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:17.693526  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:17.693559  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:17.693573  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:17.693581  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:17.696472  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:17.696503  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:17.696515  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:17 GMT
	I1212 00:37:17.696522  104530 round_trippers.go:580]     Audit-Id: f3b1cbfa-67ea-48ba-a602-3e51e26733e7
	I1212 00:37:17.696529  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:17.696537  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:17.696546  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:17.696556  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:17.696733  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I1212 00:37:17.697203  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:17.697219  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:17.697230  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:17.697237  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:17.699246  104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:37:17.699267  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:17.699274  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:17.699279  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:17.699284  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:17.699289  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:17.699303  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:17 GMT
	I1212 00:37:17.699311  104530 round_trippers.go:580]     Audit-Id: 537e896a-ad01-467d-8765-b18cc048639c
	I1212 00:37:17.699750  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:18.193513  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:18.193539  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:18.193547  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:18.193553  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:18.196642  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:18.196663  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:18.196670  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:18.196675  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:18.196680  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:18.196685  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:18.196690  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:18 GMT
	I1212 00:37:18.196695  104530 round_trippers.go:580]     Audit-Id: 01c5b2b7-3578-4302-9a5b-dbb75c34b269
	I1212 00:37:18.197211  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I1212 00:37:18.197615  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:18.197626  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:18.197637  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:18.197645  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:18.199967  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:18.199986  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:18.199995  104530 round_trippers.go:580]     Audit-Id: b956bf4f-9b6c-4de6-87c0-84916a54c9aa
	I1212 00:37:18.200004  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:18.200012  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:18.200019  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:18.200027  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:18.200035  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:18 GMT
	I1212 00:37:18.200333  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:18.692979  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:18.693006  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:18.693014  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:18.693021  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:18.696863  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:18.696888  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:18.696895  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:18.696901  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:18 GMT
	I1212 00:37:18.696906  104530 round_trippers.go:580]     Audit-Id: d5c6e54d-aaea-4bf3-8a70-4dc0b57b264e
	I1212 00:37:18.696911  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:18.696916  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:18.696921  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:18.697946  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I1212 00:37:18.698353  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:18.698366  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:18.698373  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:18.698381  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:18.700609  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:18.700629  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:18.700639  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:18.700647  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:18 GMT
	I1212 00:37:18.700655  104530 round_trippers.go:580]     Audit-Id: 0dde864e-ad38-4768-932a-24947963eeef
	I1212 00:37:18.700662  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:18.700669  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:18.700677  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:18.700840  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:18.701109  104530 pod_ready.go:102] pod "kube-controller-manager-multinode-859606" in "kube-system" namespace has status "Ready":"False"
	I1212 00:37:19.193617  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
	I1212 00:37:19.193643  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:19.193652  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:19.193658  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:19.197048  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:19.197071  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:19.197078  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:19 GMT
	I1212 00:37:19.197083  104530 round_trippers.go:580]     Audit-Id: 20502bbb-60e6-48d0-b283-2696575d955f
	I1212 00:37:19.197090  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:19.197095  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:19.197100  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:19.197106  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:19.197298  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1240","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
	I1212 00:37:19.197741  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:19.197753  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:19.197760  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:19.197766  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:19.199854  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:19.199879  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:19.199889  104530 round_trippers.go:580]     Audit-Id: d3c788eb-c748-41e7-8b78-70c1417d3584
	I1212 00:37:19.199898  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:19.199907  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:19.199932  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:19.199946  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:19.199954  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:19 GMT
	I1212 00:37:19.200107  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:19.200426  104530 pod_ready.go:92] pod "kube-controller-manager-multinode-859606" in "kube-system" namespace has status "Ready":"True"
	I1212 00:37:19.200447  104530 pod_ready.go:81] duration metric: took 5.287942632s waiting for pod "kube-controller-manager-multinode-859606" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:19.200463  104530 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6f6zz" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:19.200518  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6f6zz
	I1212 00:37:19.200527  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:19.200538  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:19.200547  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:19.203112  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:19.203134  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:19.203143  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:19.203151  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:19.203159  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:19.203168  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:19.203177  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:19 GMT
	I1212 00:37:19.203185  104530 round_trippers.go:580]     Audit-Id: d4bddcbb-39f6-4c08-83da-2d4523904cda
	I1212 00:37:19.203320  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6f6zz","generateName":"kube-proxy-","namespace":"kube-system","uid":"d5931621-47fd-4f1a-bf46-813dd8352f00","resourceVersion":"1087","creationTimestamp":"2023-12-12T00:32:02Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"92d6756f-ef04-4d8e-970a-e73854372aee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:32:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d6756f-ef04-4d8e-970a-e73854372aee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I1212 00:37:19.203874  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m03
	I1212 00:37:19.203896  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:19.203907  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:19.203928  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:19.206014  104530 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1212 00:37:19.206033  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:19.206049  104530 round_trippers.go:580]     Content-Length: 210
	I1212 00:37:19.206061  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:19 GMT
	I1212 00:37:19.206068  104530 round_trippers.go:580]     Audit-Id: 4aef6f8a-43a6-4188-a386-e5e2d3a1f6f3
	I1212 00:37:19.206082  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:19.206089  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:19.206097  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:19.206105  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:19.206236  104530 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-859606-m03\" not found","reason":"NotFound","details":{"name":"multinode-859606-m03","kind":"nodes"},"code":404}
	I1212 00:37:19.206386  104530 pod_ready.go:97] node "multinode-859606-m03" hosting pod "kube-proxy-6f6zz" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-859606-m03": nodes "multinode-859606-m03" not found
	I1212 00:37:19.206408  104530 pod_ready.go:81] duration metric: took 5.937337ms waiting for pod "kube-proxy-6f6zz" in "kube-system" namespace to be "Ready" ...
	E1212 00:37:19.206423  104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606-m03" hosting pod "kube-proxy-6f6zz" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-859606-m03": nodes "multinode-859606-m03" not found
	I1212 00:37:19.206431  104530 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-prf7f" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:19.206494  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-prf7f
	I1212 00:37:19.206504  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:19.206515  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:19.206527  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:19.208365  104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:37:19.208385  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:19.208394  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:19.208403  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:19.208418  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:19.208426  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:19.208437  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:19 GMT
	I1212 00:37:19.208447  104530 round_trippers.go:580]     Audit-Id: c0033a2c-2985-4a9c-95d1-b824f5e20713
	I1212 00:37:19.208684  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-prf7f","generateName":"kube-proxy-","namespace":"kube-system","uid":"8238226c-3d01-4b91-963b-7360206b8615","resourceVersion":"1206","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"92d6756f-ef04-4d8e-970a-e73854372aee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d6756f-ef04-4d8e-970a-e73854372aee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I1212 00:37:19.209132  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:19.209150  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:19.209164  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:19.209177  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:19.210970  104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:37:19.210988  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:19.210997  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:19.211006  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:19 GMT
	I1212 00:37:19.211020  104530 round_trippers.go:580]     Audit-Id: 396956f0-54b8-4778-ab7c-a37fe9b33b2e
	I1212 00:37:19.211027  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:19.211041  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:19.211052  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:19.211256  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:19.211606  104530 pod_ready.go:92] pod "kube-proxy-prf7f" in "kube-system" namespace has status "Ready":"True"
	I1212 00:37:19.211630  104530 pod_ready.go:81] duration metric: took 5.187099ms waiting for pod "kube-proxy-prf7f" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:19.211641  104530 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q9h26" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:19.288985  104530 request.go:629] Waited for 77.268211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q9h26
	I1212 00:37:19.289047  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q9h26
	I1212 00:37:19.289060  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:19.289074  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:19.289085  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:19.291884  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:19.291923  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:19.291934  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:19.291943  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:19.291954  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:19.291962  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:19.291969  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:19 GMT
	I1212 00:37:19.291984  104530 round_trippers.go:580]     Audit-Id: f9222a80-11b7-4070-b9c2-ea9633cc9696
	I1212 00:37:19.292162  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q9h26","generateName":"kube-proxy-","namespace":"kube-system","uid":"7dd12033-bf81-4cd3-a412-3fe3211dc87b","resourceVersion":"978","creationTimestamp":"2023-12-12T00:31:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"92d6756f-ef04-4d8e-970a-e73854372aee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:31:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d6756f-ef04-4d8e-970a-e73854372aee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I1212 00:37:19.489027  104530 request.go:629] Waited for 196.400938ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m02
	I1212 00:37:19.489092  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m02
	I1212 00:37:19.489097  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:19.489104  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:19.489111  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:19.492013  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:19.492033  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:19.492040  104530 round_trippers.go:580]     Audit-Id: 78f39b63-2309-4f9b-bec7-2fb901d235db
	I1212 00:37:19.492045  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:19.492051  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:19.492060  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:19.492069  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:19.492078  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:19 GMT
	I1212 00:37:19.492270  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606-m02","uid":"4dead465-c032-4274-8147-a5a7d38c1bf5","resourceVersion":"1083","creationTimestamp":"2023-12-12T00:34:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_35_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:34:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3805 chars]
	I1212 00:37:19.492641  104530 pod_ready.go:92] pod "kube-proxy-q9h26" in "kube-system" namespace has status "Ready":"True"
	I1212 00:37:19.492662  104530 pod_ready.go:81] duration metric: took 281.010934ms waiting for pod "kube-proxy-q9h26" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:19.492672  104530 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-859606" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:19.688873  104530 request.go:629] Waited for 196.137127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-859606
	I1212 00:37:19.688950  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-859606
	I1212 00:37:19.688955  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:19.688963  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:19.688969  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:19.691734  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:19.691755  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:19.691762  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:19 GMT
	I1212 00:37:19.691767  104530 round_trippers.go:580]     Audit-Id: f7675bf4-e31a-4738-b42f-be7859177fe3
	I1212 00:37:19.691772  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:19.691777  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:19.691783  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:19.691788  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:19.692171  104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-859606","namespace":"kube-system","uid":"19a4264c-6ba5-44f4-8419-6f04d6224c92","resourceVersion":"1215","creationTimestamp":"2023-12-12T00:30:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bf1fb8b18f1a6f1d2413ac0c0fd0e39c","kubernetes.io/config.mirror":"bf1fb8b18f1a6f1d2413ac0c0fd0e39c","kubernetes.io/config.seen":"2023-12-12T00:29:55.207819594Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I1212 00:37:19.888908  104530 request.go:629] Waited for 196.296036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:19.888977  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
	I1212 00:37:19.888982  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:19.888989  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:19.888996  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:19.891677  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:19.891697  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:19.891704  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:19 GMT
	I1212 00:37:19.891710  104530 round_trippers.go:580]     Audit-Id: 05fc06a3-8feb-45d4-9823-a6b2852345e9
	I1212 00:37:19.891723  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:19.891735  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:19.891745  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:19.891754  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:19.892212  104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
	I1212 00:37:19.892531  104530 pod_ready.go:92] pod "kube-scheduler-multinode-859606" in "kube-system" namespace has status "Ready":"True"
	I1212 00:37:19.892549  104530 pod_ready.go:81] duration metric: took 399.870057ms waiting for pod "kube-scheduler-multinode-859606" in "kube-system" namespace to be "Ready" ...
	I1212 00:37:19.892566  104530 pod_ready.go:38] duration metric: took 10.095238343s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:37:19.892585  104530 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:37:19.892637  104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:37:19.905440  104530 command_runner.go:130] > 1800
	I1212 00:37:19.905932  104530 api_server.go:72] duration metric: took 11.991353984s to wait for apiserver process to appear ...
	I1212 00:37:19.905947  104530 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:37:19.905967  104530 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
	I1212 00:37:19.912545  104530 api_server.go:279] https://192.168.39.40:8443/healthz returned 200:
	ok
	I1212 00:37:19.912608  104530 round_trippers.go:463] GET https://192.168.39.40:8443/version
	I1212 00:37:19.912620  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:19.912630  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:19.912637  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:19.913604  104530 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1212 00:37:19.913622  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:19.913631  104530 round_trippers.go:580]     Audit-Id: a90e5deb-2922-43fe-bcfb-bbd1e68986eb
	I1212 00:37:19.913640  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:19.913655  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:19.913663  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:19.913674  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:19.913683  104530 round_trippers.go:580]     Content-Length: 264
	I1212 00:37:19.913691  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:19 GMT
	I1212 00:37:19.913714  104530 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 00:37:19.913766  104530 api_server.go:141] control plane version: v1.28.4
	I1212 00:37:19.913784  104530 api_server.go:131] duration metric: took 7.830198ms to wait for apiserver health ...
	I1212 00:37:19.913794  104530 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:37:20.089251  104530 request.go:629] Waited for 175.374729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
	I1212 00:37:20.089344  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
	I1212 00:37:20.089351  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:20.089363  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:20.089370  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:20.093974  104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:37:20.094001  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:20.094009  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:20.094016  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:20 GMT
	I1212 00:37:20.094024  104530 round_trippers.go:580]     Audit-Id: a00499e6-5aa6-4108-b030-bb102abafbdd
	I1212 00:37:20.094032  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:20.094055  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:20.094065  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:20.095252  104530 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1240"},"items":[{"metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1231","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83341 chars]
	I1212 00:37:20.098784  104530 system_pods.go:59] 12 kube-system pods found
	I1212 00:37:20.098809  104530 system_pods.go:61] "coredns-5dd5756b68-t9jz8" [3605a003-e8d6-46b2-8fe7-f45647656622] Running
	I1212 00:37:20.098814  104530 system_pods.go:61] "etcd-multinode-859606" [7d6ae370-b910-4aef-8729-e141b307ae17] Running
	I1212 00:37:20.098820  104530 system_pods.go:61] "kindnet-9slwc" [6b37daf7-e9d5-47c5-ae94-01150282b6cf] Running
	I1212 00:37:20.098826  104530 system_pods.go:61] "kindnet-d4q52" [35ed1c56-7487-4b6d-ab1f-b5cfe6502739] Running
	I1212 00:37:20.098832  104530 system_pods.go:61] "kindnet-x2g5d" [c1dab004-2557-4b4f-975b-bd0b5a8f4d90] Running
	I1212 00:37:20.098839  104530 system_pods.go:61] "kube-apiserver-multinode-859606" [0060efa7-dc06-439e-878f-b93b0e016326] Running
	I1212 00:37:20.098853  104530 system_pods.go:61] "kube-controller-manager-multinode-859606" [901bf3ab-f34d-42c8-b1da-d5431ae0219f] Running
	I1212 00:37:20.098864  104530 system_pods.go:61] "kube-proxy-6f6zz" [d5931621-47fd-4f1a-bf46-813dd8352f00] Running
	I1212 00:37:20.098870  104530 system_pods.go:61] "kube-proxy-prf7f" [8238226c-3d01-4b91-963b-7360206b8615] Running
	I1212 00:37:20.098877  104530 system_pods.go:61] "kube-proxy-q9h26" [7dd12033-bf81-4cd3-a412-3fe3211dc87b] Running
	I1212 00:37:20.098887  104530 system_pods.go:61] "kube-scheduler-multinode-859606" [19a4264c-6ba5-44f4-8419-6f04d6224c92] Running
	I1212 00:37:20.098896  104530 system_pods.go:61] "storage-provisioner" [a021db21-b335-4c05-8e32-808642dbb72e] Running
	I1212 00:37:20.098906  104530 system_pods.go:74] duration metric: took 185.102197ms to wait for pod list to return data ...
	I1212 00:37:20.098917  104530 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:37:20.289369  104530 request.go:629] Waited for 190.371344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/default/serviceaccounts
	I1212 00:37:20.289426  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/default/serviceaccounts
	I1212 00:37:20.289431  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:20.289439  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:20.289445  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:20.292334  104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:37:20.292356  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:20.292380  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:20.292392  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:20.292406  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:20.292429  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:20.292440  104530 round_trippers.go:580]     Content-Length: 262
	I1212 00:37:20.292445  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:20 GMT
	I1212 00:37:20.292452  104530 round_trippers.go:580]     Audit-Id: fcc27580-a669-4f4d-a44c-e2fc099e94e8
	I1212 00:37:20.292478  104530 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1240"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b7226be9-2d9e-41aa-a29f-25b2631acf72","resourceVersion":"337","creationTimestamp":"2023-12-12T00:30:16Z"}}]}
	I1212 00:37:20.292693  104530 default_sa.go:45] found service account: "default"
	I1212 00:37:20.292714  104530 default_sa.go:55] duration metric: took 193.787623ms for default service account to be created ...
	I1212 00:37:20.292723  104530 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:37:20.489190  104530 request.go:629] Waited for 196.390334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
	I1212 00:37:20.489259  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
	I1212 00:37:20.489264  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:20.489281  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:20.489299  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:20.493457  104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:37:20.493482  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:20.493501  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:20.493511  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:20.493519  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:20.493534  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:20.493541  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:20 GMT
	I1212 00:37:20.493545  104530 round_trippers.go:580]     Audit-Id: b5e27102-8247-4af2-81d0-d5c782e978b9
	I1212 00:37:20.495018  104530 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1240"},"items":[{"metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1231","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83341 chars]
	I1212 00:37:20.497464  104530 system_pods.go:86] 12 kube-system pods found
	I1212 00:37:20.497487  104530 system_pods.go:89] "coredns-5dd5756b68-t9jz8" [3605a003-e8d6-46b2-8fe7-f45647656622] Running
	I1212 00:37:20.497492  104530 system_pods.go:89] "etcd-multinode-859606" [7d6ae370-b910-4aef-8729-e141b307ae17] Running
	I1212 00:37:20.497498  104530 system_pods.go:89] "kindnet-9slwc" [6b37daf7-e9d5-47c5-ae94-01150282b6cf] Running
	I1212 00:37:20.497505  104530 system_pods.go:89] "kindnet-d4q52" [35ed1c56-7487-4b6d-ab1f-b5cfe6502739] Running
	I1212 00:37:20.497520  104530 system_pods.go:89] "kindnet-x2g5d" [c1dab004-2557-4b4f-975b-bd0b5a8f4d90] Running
	I1212 00:37:20.497528  104530 system_pods.go:89] "kube-apiserver-multinode-859606" [0060efa7-dc06-439e-878f-b93b0e016326] Running
	I1212 00:37:20.497543  104530 system_pods.go:89] "kube-controller-manager-multinode-859606" [901bf3ab-f34d-42c8-b1da-d5431ae0219f] Running
	I1212 00:37:20.497550  104530 system_pods.go:89] "kube-proxy-6f6zz" [d5931621-47fd-4f1a-bf46-813dd8352f00] Running
	I1212 00:37:20.497554  104530 system_pods.go:89] "kube-proxy-prf7f" [8238226c-3d01-4b91-963b-7360206b8615] Running
	I1212 00:37:20.497560  104530 system_pods.go:89] "kube-proxy-q9h26" [7dd12033-bf81-4cd3-a412-3fe3211dc87b] Running
	I1212 00:37:20.497565  104530 system_pods.go:89] "kube-scheduler-multinode-859606" [19a4264c-6ba5-44f4-8419-6f04d6224c92] Running
	I1212 00:37:20.497571  104530 system_pods.go:89] "storage-provisioner" [a021db21-b335-4c05-8e32-808642dbb72e] Running
	I1212 00:37:20.497579  104530 system_pods.go:126] duration metric: took 204.845476ms to wait for k8s-apps to be running ...
	I1212 00:37:20.497589  104530 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:37:20.497645  104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:37:20.514001  104530 system_svc.go:56] duration metric: took 16.405003ms WaitForService to wait for kubelet.
	I1212 00:37:20.514018  104530 kubeadm.go:581] duration metric: took 12.599444535s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 00:37:20.514036  104530 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:37:20.689493  104530 request.go:629] Waited for 175.357994ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes
	I1212 00:37:20.689560  104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes
	I1212 00:37:20.689567  104530 round_trippers.go:469] Request Headers:
	I1212 00:37:20.689580  104530 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:37:20.689590  104530 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:37:20.692705  104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:37:20.692723  104530 round_trippers.go:577] Response Headers:
	I1212 00:37:20.692730  104530 round_trippers.go:580]     Audit-Id: 1464068b-baf2-48bc-ba66-087651c82097
	I1212 00:37:20.692735  104530 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 00:37:20.692740  104530 round_trippers.go:580]     Content-Type: application/json
	I1212 00:37:20.692752  104530 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
	I1212 00:37:20.692766  104530 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
	I1212 00:37:20.692774  104530 round_trippers.go:580]     Date: Tue, 12 Dec 2023 00:37:20 GMT
	I1212 00:37:20.693088  104530 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1240"},"items":[{"metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10008 chars]
	I1212 00:37:20.693685  104530 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 00:37:20.693709  104530 node_conditions.go:123] node cpu capacity is 2
	I1212 00:37:20.693723  104530 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 00:37:20.693735  104530 node_conditions.go:123] node cpu capacity is 2
	I1212 00:37:20.693741  104530 node_conditions.go:105] duration metric: took 179.70085ms to run NodePressure ...
	I1212 00:37:20.693757  104530 start.go:228] waiting for startup goroutines ...
	I1212 00:37:20.693768  104530 start.go:233] waiting for cluster config update ...
	I1212 00:37:20.693780  104530 start.go:242] writing updated cluster config ...
	I1212 00:37:20.694346  104530 config.go:182] Loaded profile config "multinode-859606": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 00:37:20.694464  104530 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/config.json ...
	I1212 00:37:20.697216  104530 out.go:177] * Starting worker node multinode-859606-m02 in cluster multinode-859606
	I1212 00:37:20.698351  104530 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 00:37:20.698370  104530 cache.go:56] Caching tarball of preloaded images
	I1212 00:37:20.698473  104530 preload.go:174] Found /home/jenkins/minikube-integration/17764-80294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 00:37:20.698483  104530 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 00:37:20.698567  104530 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/config.json ...
	I1212 00:37:20.698742  104530 start.go:365] acquiring machines lock for multinode-859606-m02: {Name:mk381e91746c2e5b8a4620fe3fd447d80375e413 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 00:37:20.698785  104530 start.go:369] acquired machines lock for "multinode-859606-m02" in 25.605µs
	I1212 00:37:20.698798  104530 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:37:20.698805  104530 fix.go:54] fixHost starting: m02
	I1212 00:37:20.699049  104530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1212 00:37:20.699070  104530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:37:20.713769  104530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39383
	I1212 00:37:20.714173  104530 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:37:20.714616  104530 main.go:141] libmachine: Using API Version  1
	I1212 00:37:20.714644  104530 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:37:20.714957  104530 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:37:20.715148  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
	I1212 00:37:20.715321  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetState
	I1212 00:37:20.716762  104530 fix.go:102] recreateIfNeeded on multinode-859606-m02: state=Stopped err=<nil>
	I1212 00:37:20.716788  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
	W1212 00:37:20.716969  104530 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 00:37:20.718972  104530 out.go:177] * Restarting existing kvm2 VM for "multinode-859606-m02" ...
	I1212 00:37:20.720351  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .Start
	I1212 00:37:20.720531  104530 main.go:141] libmachine: (multinode-859606-m02) Ensuring networks are active...
	I1212 00:37:20.721224  104530 main.go:141] libmachine: (multinode-859606-m02) Ensuring network default is active
	I1212 00:37:20.721668  104530 main.go:141] libmachine: (multinode-859606-m02) Ensuring network mk-multinode-859606 is active
	I1212 00:37:20.722168  104530 main.go:141] libmachine: (multinode-859606-m02) Getting domain xml...
	I1212 00:37:20.722963  104530 main.go:141] libmachine: (multinode-859606-m02) Creating domain...
	I1212 00:37:21.957474  104530 main.go:141] libmachine: (multinode-859606-m02) Waiting to get IP...
	I1212 00:37:21.958335  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:21.958740  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
	I1212 00:37:21.958796  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:21.958699  104802 retry.go:31] will retry after 282.895442ms: waiting for machine to come up
	I1212 00:37:22.243280  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:22.243745  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
	I1212 00:37:22.243773  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:22.243699  104802 retry.go:31] will retry after 387.587998ms: waiting for machine to come up
	I1212 00:37:22.633350  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:22.633841  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
	I1212 00:37:22.633875  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:22.633770  104802 retry.go:31] will retry after 299.810803ms: waiting for machine to come up
	I1212 00:37:22.935179  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:22.935627  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
	I1212 00:37:22.935662  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:22.935567  104802 retry.go:31] will retry after 368.460834ms: waiting for machine to come up
	I1212 00:37:23.306050  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:23.306531  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
	I1212 00:37:23.306554  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:23.306486  104802 retry.go:31] will retry after 567.761569ms: waiting for machine to come up
	I1212 00:37:23.876187  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:23.876658  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
	I1212 00:37:23.876692  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:23.876603  104802 retry.go:31] will retry after 673.685642ms: waiting for machine to come up
	I1212 00:37:24.551471  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:24.551879  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
	I1212 00:37:24.551932  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:24.551825  104802 retry.go:31] will retry after 837.913991ms: waiting for machine to come up
	I1212 00:37:25.391781  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:25.392075  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
	I1212 00:37:25.392106  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:25.392038  104802 retry.go:31] will retry after 1.006695939s: waiting for machine to come up
	I1212 00:37:26.400658  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:26.401136  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
	I1212 00:37:26.401168  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:26.401063  104802 retry.go:31] will retry after 1.662996951s: waiting for machine to come up
	I1212 00:37:28.065937  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:28.066407  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
	I1212 00:37:28.066429  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:28.066363  104802 retry.go:31] will retry after 2.272536479s: waiting for machine to come up
	I1212 00:37:30.341875  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:30.342336  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
	I1212 00:37:30.342380  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:30.342274  104802 retry.go:31] will retry after 1.895134507s: waiting for machine to come up
	I1212 00:37:32.239315  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:32.239701  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
	I1212 00:37:32.239736  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:32.239637  104802 retry.go:31] will retry after 2.566822425s: waiting for machine to come up
	I1212 00:37:34.808939  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:34.809382  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
	I1212 00:37:34.809406  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:34.809339  104802 retry.go:31] will retry after 4.439419543s: waiting for machine to come up
	I1212 00:37:39.249907  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.250290  104530 main.go:141] libmachine: (multinode-859606-m02) Found IP for machine: 192.168.39.65
	I1212 00:37:39.250320  104530 main.go:141] libmachine: (multinode-859606-m02) Reserving static IP address...
	I1212 00:37:39.250342  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has current primary IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.250818  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "multinode-859606-m02", mac: "52:54:00:ea:e9:13", ip: "192.168.39.65"} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:39.250858  104530 main.go:141] libmachine: (multinode-859606-m02) Reserved static IP address: 192.168.39.65
	I1212 00:37:39.250878  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | skip adding static IP to network mk-multinode-859606 - found existing host DHCP lease matching {name: "multinode-859606-m02", mac: "52:54:00:ea:e9:13", ip: "192.168.39.65"}
	I1212 00:37:39.250889  104530 main.go:141] libmachine: (multinode-859606-m02) Waiting for SSH to be available...
	I1212 00:37:39.250909  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | Getting to WaitForSSH function...
	I1212 00:37:39.253228  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.253705  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:39.253733  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.253879  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | Using SSH client type: external
	I1212 00:37:39.253906  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa (-rw-------)
	I1212 00:37:39.253933  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 00:37:39.253947  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | About to run SSH command:
	I1212 00:37:39.253968  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | exit 0
	I1212 00:37:39.347723  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | SSH cmd err, output: <nil>: 
	I1212 00:37:39.348137  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetConfigRaw
	I1212 00:37:39.348792  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetIP
	I1212 00:37:39.351240  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.351592  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:39.351628  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.351860  104530 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/config.json ...
	I1212 00:37:39.352092  104530 machine.go:88] provisioning docker machine ...
	I1212 00:37:39.352113  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
	I1212 00:37:39.352303  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetMachineName
	I1212 00:37:39.352445  104530 buildroot.go:166] provisioning hostname "multinode-859606-m02"
	I1212 00:37:39.352470  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetMachineName
	I1212 00:37:39.352609  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
	I1212 00:37:39.354957  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.355309  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:39.355339  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.355537  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
	I1212 00:37:39.355716  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:39.355867  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:39.355992  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
	I1212 00:37:39.356149  104530 main.go:141] libmachine: Using SSH client type: native
	I1212 00:37:39.356637  104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1212 00:37:39.356656  104530 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-859606-m02 && echo "multinode-859606-m02" | sudo tee /etc/hostname
	I1212 00:37:39.502532  104530 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-859606-m02
	
	I1212 00:37:39.502568  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
	I1212 00:37:39.505328  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.505789  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:39.505823  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.505999  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
	I1212 00:37:39.506231  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:39.506373  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:39.506531  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
	I1212 00:37:39.506708  104530 main.go:141] libmachine: Using SSH client type: native
	I1212 00:37:39.507067  104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1212 00:37:39.507085  104530 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-859606-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-859606-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-859606-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:37:39.645009  104530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:37:39.645036  104530 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17764-80294/.minikube CaCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17764-80294/.minikube}
	I1212 00:37:39.645051  104530 buildroot.go:174] setting up certificates
	I1212 00:37:39.645059  104530 provision.go:83] configureAuth start
	I1212 00:37:39.645068  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetMachineName
	I1212 00:37:39.645319  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetIP
	I1212 00:37:39.648244  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.648695  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:39.648726  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.648891  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
	I1212 00:37:39.651280  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.651603  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:39.651634  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.651775  104530 provision.go:138] copyHostCerts
	I1212 00:37:39.651810  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem
	I1212 00:37:39.651849  104530 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem, removing ...
	I1212 00:37:39.651862  104530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem
	I1212 00:37:39.651958  104530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem (1078 bytes)
	I1212 00:37:39.652055  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem
	I1212 00:37:39.652080  104530 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem, removing ...
	I1212 00:37:39.652087  104530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem
	I1212 00:37:39.652126  104530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem (1123 bytes)
	I1212 00:37:39.652240  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem
	I1212 00:37:39.652270  104530 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem, removing ...
	I1212 00:37:39.652278  104530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem
	I1212 00:37:39.652320  104530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem (1679 bytes)
	I1212 00:37:39.652413  104530 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca-key.pem org=jenkins.multinode-859606-m02 san=[192.168.39.65 192.168.39.65 localhost 127.0.0.1 minikube multinode-859606-m02]
	I1212 00:37:39.786080  104530 provision.go:172] copyRemoteCerts
	I1212 00:37:39.786162  104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:37:39.786193  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
	I1212 00:37:39.788840  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.789107  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:39.789147  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.789364  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
	I1212 00:37:39.789559  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:39.789730  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
	I1212 00:37:39.789868  104530 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa Username:docker}
	I1212 00:37:39.884832  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:37:39.884920  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:37:39.908744  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:37:39.908817  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1212 00:37:39.932380  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:37:39.932446  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:37:39.956816  104530 provision.go:86] duration metric: configureAuth took 311.743914ms
	I1212 00:37:39.956853  104530 buildroot.go:189] setting minikube options for container-runtime
	I1212 00:37:39.957091  104530 config.go:182] Loaded profile config "multinode-859606": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 00:37:39.957118  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
	I1212 00:37:39.957389  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
	I1212 00:37:39.960094  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.960494  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:39.960529  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:39.960669  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
	I1212 00:37:39.960847  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:39.961048  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:39.961181  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
	I1212 00:37:39.961346  104530 main.go:141] libmachine: Using SSH client type: native
	I1212 00:37:39.961722  104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1212 00:37:39.961740  104530 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 00:37:40.093977  104530 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 00:37:40.094012  104530 buildroot.go:70] root file system type: tmpfs
	I1212 00:37:40.094174  104530 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 00:37:40.094208  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
	I1212 00:37:40.097149  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:40.097507  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:40.097534  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:40.097760  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
	I1212 00:37:40.098013  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:40.098210  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:40.098318  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
	I1212 00:37:40.098507  104530 main.go:141] libmachine: Using SSH client type: native
	I1212 00:37:40.098848  104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1212 00:37:40.098916  104530 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.40"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 00:37:40.241326  104530 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.40
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 00:37:40.241355  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
	I1212 00:37:40.243925  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:40.244271  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:40.244296  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:40.244504  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
	I1212 00:37:40.244693  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:40.244875  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:40.245023  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
	I1212 00:37:40.245173  104530 main.go:141] libmachine: Using SSH client type: native
	I1212 00:37:40.245547  104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1212 00:37:40.245565  104530 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 00:37:41.126250  104530 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1212 00:37:41.126280  104530 machine.go:91] provisioned docker machine in 1.774172725s
	I1212 00:37:41.126296  104530 start.go:300] post-start starting for "multinode-859606-m02" (driver="kvm2")
	I1212 00:37:41.126310  104530 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:37:41.126329  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
	I1212 00:37:41.126679  104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:37:41.126707  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
	I1212 00:37:41.129504  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:41.129833  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:41.129866  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:41.130073  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
	I1212 00:37:41.130301  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:41.130478  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
	I1212 00:37:41.130687  104530 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa Username:docker}
	I1212 00:37:41.225898  104530 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:37:41.230065  104530 command_runner.go:130] > NAME=Buildroot
	I1212 00:37:41.230089  104530 command_runner.go:130] > VERSION=2021.02.12-1-g0ec83c8-dirty
	I1212 00:37:41.230096  104530 command_runner.go:130] > ID=buildroot
	I1212 00:37:41.230109  104530 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 00:37:41.230117  104530 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 00:37:41.230251  104530 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 00:37:41.230275  104530 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-80294/.minikube/addons for local assets ...
	I1212 00:37:41.230351  104530 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-80294/.minikube/files for local assets ...
	I1212 00:37:41.230452  104530 filesync.go:149] local asset: /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem -> 876092.pem in /etc/ssl/certs
	I1212 00:37:41.230466  104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem -> /etc/ssl/certs/876092.pem
	I1212 00:37:41.230586  104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:37:41.239133  104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem --> /etc/ssl/certs/876092.pem (1708 bytes)
	I1212 00:37:41.262487  104530 start.go:303] post-start completed in 136.174154ms
	I1212 00:37:41.262513  104530 fix.go:56] fixHost completed within 20.563707335s
	I1212 00:37:41.262539  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
	I1212 00:37:41.265240  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:41.265538  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:41.265572  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:41.265778  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
	I1212 00:37:41.265950  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:41.266126  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:41.266310  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
	I1212 00:37:41.266489  104530 main.go:141] libmachine: Using SSH client type: native
	I1212 00:37:41.266856  104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1212 00:37:41.266871  104530 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 00:37:41.396610  104530 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702341461.344204788
	
	I1212 00:37:41.396638  104530 fix.go:206] guest clock: 1702341461.344204788
	I1212 00:37:41.396649  104530 fix.go:219] Guest: 2023-12-12 00:37:41.344204788 +0000 UTC Remote: 2023-12-12 00:37:41.262521516 +0000 UTC m=+81.745766897 (delta=81.683272ms)
	I1212 00:37:41.396669  104530 fix.go:190] guest clock delta is within tolerance: 81.683272ms
	I1212 00:37:41.396676  104530 start.go:83] releasing machines lock for "multinode-859606-m02", held for 20.697881438s
	I1212 00:37:41.396707  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
	I1212 00:37:41.396998  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetIP
	I1212 00:37:41.399794  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:41.400251  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:41.400284  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:41.402301  104530 out.go:177] * Found network options:
	I1212 00:37:41.403745  104530 out.go:177]   - NO_PROXY=192.168.39.40
	W1212 00:37:41.404991  104530 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 00:37:41.405014  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
	I1212 00:37:41.405584  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
	I1212 00:37:41.405757  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
	I1212 00:37:41.405832  104530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:37:41.405875  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
	W1212 00:37:41.405953  104530 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 00:37:41.406034  104530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:37:41.406061  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
	I1212 00:37:41.408298  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:41.408470  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:41.408704  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:41.408734  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:41.408860  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
	I1212 00:37:41.408890  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:37:41.408931  104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:37:41.409042  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
	I1212 00:37:41.409170  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:41.409276  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:37:41.409448  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
	I1212 00:37:41.409487  104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
	I1212 00:37:41.409614  104530 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa Username:docker}
	I1212 00:37:41.409611  104530 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa Username:docker}
	I1212 00:37:41.504163  104530 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 00:37:41.504453  104530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:37:41.504528  104530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:37:41.528894  104530 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 00:37:41.528955  104530 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 00:37:41.529013  104530 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:37:41.529030  104530 start.go:475] detecting cgroup driver to use...
	I1212 00:37:41.529132  104530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:37:41.549871  104530 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 00:37:41.549952  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 00:37:41.559926  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 00:37:41.569604  104530 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 00:37:41.569669  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 00:37:41.578872  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:37:41.588052  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 00:37:41.597753  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:37:41.607940  104530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:37:41.618063  104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 00:37:41.628111  104530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:37:41.637202  104530 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 00:37:41.637321  104530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:37:41.645675  104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:37:41.756330  104530 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 00:37:41.774116  104530 start.go:475] detecting cgroup driver to use...
	I1212 00:37:41.774203  104530 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 00:37:41.790254  104530 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 00:37:41.790292  104530 command_runner.go:130] > [Unit]
	I1212 00:37:41.790304  104530 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 00:37:41.790313  104530 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 00:37:41.790321  104530 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 00:37:41.790329  104530 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 00:37:41.790357  104530 command_runner.go:130] > StartLimitBurst=3
	I1212 00:37:41.790372  104530 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 00:37:41.790377  104530 command_runner.go:130] > [Service]
	I1212 00:37:41.790387  104530 command_runner.go:130] > Type=notify
	I1212 00:37:41.790391  104530 command_runner.go:130] > Restart=on-failure
	I1212 00:37:41.790396  104530 command_runner.go:130] > Environment=NO_PROXY=192.168.39.40
	I1212 00:37:41.790406  104530 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 00:37:41.790421  104530 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 00:37:41.790437  104530 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 00:37:41.790453  104530 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 00:37:41.790463  104530 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 00:37:41.790474  104530 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 00:37:41.790485  104530 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 00:37:41.790548  104530 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 00:37:41.790571  104530 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 00:37:41.790578  104530 command_runner.go:130] > ExecStart=
	I1212 00:37:41.790612  104530 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1212 00:37:41.790624  104530 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 00:37:41.790640  104530 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 00:37:41.790650  104530 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 00:37:41.790654  104530 command_runner.go:130] > LimitNOFILE=infinity
	I1212 00:37:41.790662  104530 command_runner.go:130] > LimitNPROC=infinity
	I1212 00:37:41.790671  104530 command_runner.go:130] > LimitCORE=infinity
	I1212 00:37:41.790681  104530 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 00:37:41.790693  104530 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 00:37:41.790703  104530 command_runner.go:130] > TasksMax=infinity
	I1212 00:37:41.790718  104530 command_runner.go:130] > TimeoutStartSec=0
	I1212 00:37:41.790729  104530 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 00:37:41.790740  104530 command_runner.go:130] > Delegate=yes
	I1212 00:37:41.790749  104530 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 00:37:41.790764  104530 command_runner.go:130] > KillMode=process
	I1212 00:37:41.790774  104530 command_runner.go:130] > [Install]
	I1212 00:37:41.790781  104530 command_runner.go:130] > WantedBy=multi-user.target
	I1212 00:37:41.790852  104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:37:41.807010  104530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:37:41.831315  104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:37:41.843702  104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 00:37:41.855452  104530 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 00:37:41.887392  104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 00:37:41.900115  104530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:37:41.917122  104530 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 00:37:41.917212  104530 ssh_runner.go:195] Run: which cri-dockerd
	I1212 00:37:41.920948  104530 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 00:37:41.921049  104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 00:37:41.929638  104530 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 00:37:41.945850  104530 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 00:37:42.053680  104530 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 00:37:42.164852  104530 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 00:37:42.164906  104530 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 00:37:42.181956  104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:37:42.292269  104530 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 00:37:43.762922  104530 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.47061306s)
	I1212 00:37:43.762999  104530 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 00:37:43.866143  104530 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 00:37:43.974469  104530 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 00:37:44.089805  104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:37:44.189760  104530 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 00:37:44.203372  104530 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
	I1212 00:37:44.203469  104530 ssh_runner.go:195] Run: sudo journalctl --no-pager -u cri-docker.socket
	I1212 00:37:44.213697  104530 command_runner.go:130] > -- Journal begins at Tue 2023-12-12 00:37:32 UTC, ends at Tue 2023-12-12 00:37:44 UTC. --
	I1212 00:37:44.213720  104530 command_runner.go:130] > Dec 12 00:37:33 minikube systemd[1]: Starting CRI Docker Socket for the API.
	I1212 00:37:44.213727  104530 command_runner.go:130] > Dec 12 00:37:33 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	I1212 00:37:44.213734  104530 command_runner.go:130] > Dec 12 00:37:36 minikube systemd[1]: cri-docker.socket: Succeeded.
	I1212 00:37:44.213740  104530 command_runner.go:130] > Dec 12 00:37:36 minikube systemd[1]: Closed CRI Docker Socket for the API.
	I1212 00:37:44.213747  104530 command_runner.go:130] > Dec 12 00:37:36 minikube systemd[1]: Stopping CRI Docker Socket for the API.
	I1212 00:37:44.213755  104530 command_runner.go:130] > Dec 12 00:37:36 minikube systemd[1]: Starting CRI Docker Socket for the API.
	I1212 00:37:44.213761  104530 command_runner.go:130] > Dec 12 00:37:36 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	I1212 00:37:44.213770  104530 command_runner.go:130] > Dec 12 00:37:38 minikube systemd[1]: cri-docker.socket: Succeeded.
	I1212 00:37:44.213778  104530 command_runner.go:130] > Dec 12 00:37:38 minikube systemd[1]: Closed CRI Docker Socket for the API.
	I1212 00:37:44.213786  104530 command_runner.go:130] > Dec 12 00:37:38 minikube systemd[1]: Stopping CRI Docker Socket for the API.
	I1212 00:37:44.213794  104530 command_runner.go:130] > Dec 12 00:37:38 minikube systemd[1]: Starting CRI Docker Socket for the API.
	I1212 00:37:44.213801  104530 command_runner.go:130] > Dec 12 00:37:38 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	I1212 00:37:44.213814  104530 command_runner.go:130] > Dec 12 00:37:40 multinode-859606-m02 systemd[1]: cri-docker.socket: Succeeded.
	I1212 00:37:44.213828  104530 command_runner.go:130] > Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Closed CRI Docker Socket for the API.
	I1212 00:37:44.213842  104530 command_runner.go:130] > Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Stopping CRI Docker Socket for the API.
	I1212 00:37:44.213860  104530 command_runner.go:130] > Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Starting CRI Docker Socket for the API.
	I1212 00:37:44.213874  104530 command_runner.go:130] > Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Listening on CRI Docker Socket for the API.
	I1212 00:37:44.213887  104530 command_runner.go:130] > Dec 12 00:37:44 multinode-859606-m02 systemd[1]: cri-docker.socket: Succeeded.
	I1212 00:37:44.213899  104530 command_runner.go:130] > Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Closed CRI Docker Socket for the API.
	I1212 00:37:44.213913  104530 command_runner.go:130] > Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Stopping CRI Docker Socket for the API.
	I1212 00:37:44.213929  104530 command_runner.go:130] > Dec 12 00:37:44 multinode-859606-m02 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	I1212 00:37:44.213946  104530 command_runner.go:130] > Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	I1212 00:37:44.216418  104530 out.go:177] 
	W1212 00:37:44.218157  104530 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Tue 2023-12-12 00:37:32 UTC, ends at Tue 2023-12-12 00:37:44 UTC. --
	Dec 12 00:37:33 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 00:37:33 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 00:37:36 minikube systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 00:37:36 minikube systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 00:37:36 minikube systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 00:37:36 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 00:37:36 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 00:37:38 minikube systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 00:37:38 minikube systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 00:37:38 minikube systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 00:37:38 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 00:37:38 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 00:37:40 multinode-859606-m02 systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 00:37:44 multinode-859606-m02 systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 00:37:44 multinode-859606-m02 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	W1212 00:37:44.218182  104530 out.go:239] * 
	W1212 00:37:44.219022  104530 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:37:44.221199  104530 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-12-12 00:36:31 UTC, ends at Tue 2023-12-12 00:37:45 UTC. --
	Dec 12 00:37:05 multinode-859606 dockerd[833]: time="2023-12-12T00:37:05.679427372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 00:37:05 multinode-859606 dockerd[833]: time="2023-12-12T00:37:05.679658304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 00:37:07 multinode-859606 cri-dockerd[1062]: time="2023-12-12T00:37:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a871816c58a42ddd362fd89fa0457159c939b88d434669ab9c87303a2cdce4ea/resolv.conf as [nameserver 192.168.122.1]"
	Dec 12 00:37:08 multinode-859606 dockerd[833]: time="2023-12-12T00:37:08.060585294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 00:37:08 multinode-859606 dockerd[833]: time="2023-12-12T00:37:08.060634616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 00:37:08 multinode-859606 dockerd[833]: time="2023-12-12T00:37:08.060653425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 00:37:08 multinode-859606 dockerd[833]: time="2023-12-12T00:37:08.060667094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 00:37:11 multinode-859606 dockerd[833]: time="2023-12-12T00:37:11.820246675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 00:37:11 multinode-859606 dockerd[833]: time="2023-12-12T00:37:11.820364685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 00:37:11 multinode-859606 dockerd[833]: time="2023-12-12T00:37:11.820401455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 00:37:11 multinode-859606 dockerd[833]: time="2023-12-12T00:37:11.820412808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 00:37:11 multinode-859606 dockerd[833]: time="2023-12-12T00:37:11.821705898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 00:37:11 multinode-859606 dockerd[833]: time="2023-12-12T00:37:11.822643071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 00:37:11 multinode-859606 dockerd[833]: time="2023-12-12T00:37:11.822914208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 00:37:11 multinode-859606 dockerd[833]: time="2023-12-12T00:37:11.823074692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 00:37:12 multinode-859606 cri-dockerd[1062]: time="2023-12-12T00:37:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cf1f78f2e3a90cc24f70123b2504134a6d0123ff6370d1bc64ce6dfdb1255ca3/resolv.conf as [nameserver 192.168.122.1]"
	Dec 12 00:37:12 multinode-859606 dockerd[833]: time="2023-12-12T00:37:12.464886651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 00:37:12 multinode-859606 dockerd[833]: time="2023-12-12T00:37:12.464948238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 00:37:12 multinode-859606 dockerd[833]: time="2023-12-12T00:37:12.464974231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 00:37:12 multinode-859606 dockerd[833]: time="2023-12-12T00:37:12.465070138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 00:37:12 multinode-859606 cri-dockerd[1062]: time="2023-12-12T00:37:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d54ee5c24673d29c1697cc6ea65d3e7ff3e3a6bd5430a949d8748c099c864ebe/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 12 00:37:12 multinode-859606 dockerd[833]: time="2023-12-12T00:37:12.761304053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 00:37:12 multinode-859606 dockerd[833]: time="2023-12-12T00:37:12.761428711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 00:37:12 multinode-859606 dockerd[833]: time="2023-12-12T00:37:12.761450336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 00:37:12 multinode-859606 dockerd[833]: time="2023-12-12T00:37:12.761504628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d04545a1f3fee       8c811b4aec35f       33 seconds ago      Running             busybox                   2                   d54ee5c24673d       busybox-5bc68d56bd-8rtcm
	6784eb7676333       ead0a4a53df89       33 seconds ago      Running             coredns                   2                   cf1f78f2e3a90       coredns-5dd5756b68-t9jz8
	2b07939ba9ef6       c7d1297425461       38 seconds ago      Running             kindnet-cni               2                   a871816c58a42       kindnet-x2g5d
	c656da1ebafe8       6e38f40d628db       40 seconds ago      Running             storage-provisioner       2                   a3ad9a474f7aa       storage-provisioner
	810342f9e6bb9       83f6cc407eed8       41 seconds ago      Running             kube-proxy                2                   d1a15039b58d3       kube-proxy-prf7f
	8699415e5935b       d058aa5ab969c       46 seconds ago      Running             kube-controller-manager   2                   23f398d4b8027       kube-controller-manager-multinode-859606
	1ebf2246a1889       7fe0e6f37db33       46 seconds ago      Running             kube-apiserver            2                   39f0bea97f6f3       kube-apiserver-multinode-859606
	acd573d2c57e9       73deb9a3f7025       46 seconds ago      Running             etcd                      2                   a7ec9e84f4ed9       etcd-multinode-859606
	407d7ddb64227       e3db313c6dbc0       47 seconds ago      Running             kube-scheduler            2                   0aaed96252109       kube-scheduler-multinode-859606
	263bfb1fd11f8       8c811b4aec35f       3 minutes ago       Exited              busybox                   1                   5d7c24535c7c4       busybox-5bc68d56bd-8rtcm
	abde5ad85d4a0       ead0a4a53df89       3 minutes ago       Exited              coredns                   1                   6960e84b00b86       coredns-5dd5756b68-t9jz8
	55413175770e7       c7d1297425461       3 minutes ago       Exited              kindnet-cni               1                   19421dc217531       kindnet-x2g5d
	56fd6254d6e1f       6e38f40d628db       3 minutes ago       Exited              storage-provisioner       1                   ecfcbd5863212       storage-provisioner
	b63a75f45416a       83f6cc407eed8       3 minutes ago       Exited              kube-proxy                1                   9767a413586e7       kube-proxy-prf7f
	4ba778c674f06       e3db313c6dbc0       3 minutes ago       Exited              kube-scheduler            1                   34ac7e63ee514       kube-scheduler-multinode-859606
	19f9d76e8f1cc       73deb9a3f7025       3 minutes ago       Exited              etcd                      1                   510b18b7b6d68       etcd-multinode-859606
	fc27b85835028       d058aa5ab969c       3 minutes ago       Exited              kube-controller-manager   1                   ed0cff49857f6       kube-controller-manager-multinode-859606
	a49117d4a4c80       7fe0e6f37db33       3 minutes ago       Exited              kube-apiserver            1                   5aa25d818283c       kube-apiserver-multinode-859606
	
	* 
	* ==> coredns [6784eb767633] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56052 - 15722 "HINFO IN 8663818663549164460.3643203038294693926. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021662784s
	
	* 
	* ==> coredns [abde5ad85d4a] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52406 - 34433 "HINFO IN 7865527086462477606.3380958876542272888. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.061926124s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-859606
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-859606
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f155626207ae1ae93e2fd3ceb81b1e734028b5f4
	                    minikube.k8s.io/name=multinode-859606
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T00_30_04_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 00:29:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-859606
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 00:37:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 00:37:09 +0000   Tue, 12 Dec 2023 00:29:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 00:37:09 +0000   Tue, 12 Dec 2023 00:29:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 00:37:09 +0000   Tue, 12 Dec 2023 00:29:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 00:37:09 +0000   Tue, 12 Dec 2023 00:37:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.40
	  Hostname:    multinode-859606
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa12b2faaeaf46879d88c9af881444f2
	  System UUID:                fa12b2fa-aeaf-4687-9d88-c9af881444f2
	  Boot ID:                    8cacd70d-3167-4874-8265-e7323653ef3f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-8rtcm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 coredns-5dd5756b68-t9jz8                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m29s
	  kube-system                 etcd-multinode-859606                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m41s
	  kube-system                 kindnet-x2g5d                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m29s
	  kube-system                 kube-apiserver-multinode-859606             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m43s
	  kube-system                 kube-controller-manager-multinode-859606    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m41s
	  kube-system                 kube-proxy-prf7f                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 kube-scheduler-multinode-859606             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m43s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m27s                  kube-proxy       
	  Normal  Starting                 39s                    kube-proxy       
	  Normal  Starting                 3m33s                  kube-proxy       
	  Normal  Starting                 7m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m50s (x8 over 7m50s)  kubelet          Node multinode-859606 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m50s (x8 over 7m50s)  kubelet          Node multinode-859606 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m50s (x7 over 7m50s)  kubelet          Node multinode-859606 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeAllocatableEnforced  7m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m42s                  kubelet          Node multinode-859606 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m42s                  kubelet          Node multinode-859606 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m42s                  kubelet          Node multinode-859606 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m42s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m29s                  node-controller  Node multinode-859606 event: Registered Node multinode-859606 in Controller
	  Normal  NodeReady                7m17s                  kubelet          Node multinode-859606 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    3m41s (x8 over 3m41s)  kubelet          Node multinode-859606 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  3m41s (x8 over 3m41s)  kubelet          Node multinode-859606 status is now: NodeHasSufficientMemory
	  Normal  Starting                 3m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     3m41s (x7 over 3m41s)  kubelet          Node multinode-859606 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m22s                  node-controller  Node multinode-859606 event: Registered Node multinode-859606 in Controller
	  Normal  Starting                 49s                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  49s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  48s (x8 over 49s)      kubelet          Node multinode-859606 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    48s (x8 over 49s)      kubelet          Node multinode-859606 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     48s (x7 over 49s)      kubelet          Node multinode-859606 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s                    node-controller  Node multinode-859606 event: Registered Node multinode-859606 in Controller
	
	
	Name:               multinode-859606-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-859606-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f155626207ae1ae93e2fd3ceb81b1e734028b5f4
	                    minikube.k8s.io/name=multinode-859606
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_12T00_35_40_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 00:34:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-859606-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 00:35:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 00:35:08 +0000   Tue, 12 Dec 2023 00:34:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 00:35:08 +0000   Tue, 12 Dec 2023 00:34:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 00:35:08 +0000   Tue, 12 Dec 2023 00:34:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 00:35:08 +0000   Tue, 12 Dec 2023 00:35:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.65
	  Hostname:    multinode-859606-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 4890d00be799442695d20e2e29a3fb1a
	  System UUID:                4890d00b-e799-4426-95d2-0e2e29a3fb1a
	  Boot ID:                    1604b089-1d92-4def-8405-ea47c499ea28
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-npwlc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kindnet-d4q52               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m34s
	  kube-system                 kube-proxy-q9h26            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m28s                  kube-proxy       
	  Normal  Starting                 2m45s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    6m34s (x2 over 6m34s)  kubelet          Node multinode-859606-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m34s (x2 over 6m34s)  kubelet          Node multinode-859606-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m34s (x2 over 6m34s)  kubelet          Node multinode-859606-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 6m34s                  kubelet          Starting kubelet.
	  Normal  NodeReady                6m22s                  kubelet          Node multinode-859606-m02 status is now: NodeReady
	  Normal  Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m47s (x2 over 2m47s)  kubelet          Node multinode-859606-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m47s (x2 over 2m47s)  kubelet          Node multinode-859606-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m47s (x2 over 2m47s)  kubelet          Node multinode-859606-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m37s                  kubelet          Node multinode-859606-m02 status is now: NodeReady
	  Normal  RegisteredNode           30s                    node-controller  Node multinode-859606-m02 event: Registered Node multinode-859606-m02 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec12 00:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069779] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.352877] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.446664] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.151166] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.741585] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.453899] systemd-fstab-generator[512]: Ignoring "noauto" for root device
	[  +0.101582] systemd-fstab-generator[523]: Ignoring "noauto" for root device
	[  +1.302480] systemd-fstab-generator[757]: Ignoring "noauto" for root device
	[  +0.284963] systemd-fstab-generator[794]: Ignoring "noauto" for root device
	[  +0.109891] systemd-fstab-generator[805]: Ignoring "noauto" for root device
	[  +0.121787] systemd-fstab-generator[818]: Ignoring "noauto" for root device
	[  +1.585692] systemd-fstab-generator[1007]: Ignoring "noauto" for root device
	[  +0.117522] systemd-fstab-generator[1018]: Ignoring "noauto" for root device
	[  +0.106548] systemd-fstab-generator[1029]: Ignoring "noauto" for root device
	[  +0.114704] systemd-fstab-generator[1040]: Ignoring "noauto" for root device
	[  +0.119127] systemd-fstab-generator[1054]: Ignoring "noauto" for root device
	[ +11.989836] systemd-fstab-generator[1306]: Ignoring "noauto" for root device
	[  +0.395860] kauditd_printk_skb: 67 callbacks suppressed
	
	* 
	* ==> etcd [19f9d76e8f1c] <==
	* {"level":"info","ts":"2023-12-12T00:34:07.554415Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-12T00:34:08.814211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-12T00:34:08.814409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-12T00:34:08.814454Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a received MsgPreVoteResp from 1088a855a4aa8d0a at term 2"}
	{"level":"info","ts":"2023-12-12T00:34:08.814544Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a became candidate at term 3"}
	{"level":"info","ts":"2023-12-12T00:34:08.81456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a received MsgVoteResp from 1088a855a4aa8d0a at term 3"}
	{"level":"info","ts":"2023-12-12T00:34:08.814719Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a became leader at term 3"}
	{"level":"info","ts":"2023-12-12T00:34:08.81475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1088a855a4aa8d0a elected leader 1088a855a4aa8d0a at term 3"}
	{"level":"info","ts":"2023-12-12T00:34:08.817889Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T00:34:08.81793Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"1088a855a4aa8d0a","local-member-attributes":"{Name:multinode-859606 ClientURLs:[https://192.168.39.40:2379]}","request-path":"/0/members/1088a855a4aa8d0a/attributes","cluster-id":"ca485a4cd00ef8c5","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T00:34:08.818495Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T00:34:08.819786Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.40:2379"}
	{"level":"info","ts":"2023-12-12T00:34:08.820582Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T00:34:08.821374Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T00:34:08.821452Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T00:35:54.355768Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-12-12T00:35:54.355918Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-859606","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.40:2380"],"advertise-client-urls":["https://192.168.39.40:2379"]}
	{"level":"warn","ts":"2023-12-12T00:35:54.35605Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-12T00:35:54.356144Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-12T00:35:54.378231Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.40:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-12T00:35:54.378359Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.40:2379: use of closed network connection"}
	{"level":"info","ts":"2023-12-12T00:35:54.378407Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"1088a855a4aa8d0a","current-leader-member-id":"1088a855a4aa8d0a"}
	{"level":"info","ts":"2023-12-12T00:35:54.382889Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.40:2380"}
	{"level":"info","ts":"2023-12-12T00:35:54.383001Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.40:2380"}
	{"level":"info","ts":"2023-12-12T00:35:54.383016Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-859606","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.40:2380"],"advertise-client-urls":["https://192.168.39.40:2379"]}
	
	* 
	* ==> etcd [acd573d2c57e] <==
	* {"level":"info","ts":"2023-12-12T00:36:59.853853Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-12T00:36:59.853921Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-12-12T00:36:59.860383Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T00:36:59.86044Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T00:36:59.860447Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T00:36:59.86089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a switched to configuration voters=(1191387187227823370)"}
	{"level":"info","ts":"2023-12-12T00:36:59.860956Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ca485a4cd00ef8c5","local-member-id":"1088a855a4aa8d0a","added-peer-id":"1088a855a4aa8d0a","added-peer-peer-urls":["https://192.168.39.40:2380"]}
	{"level":"info","ts":"2023-12-12T00:36:59.861097Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ca485a4cd00ef8c5","local-member-id":"1088a855a4aa8d0a","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T00:36:59.86112Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T00:36:59.862426Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.40:2380"}
	{"level":"info","ts":"2023-12-12T00:36:59.862439Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.40:2380"}
	{"level":"info","ts":"2023-12-12T00:37:01.295742Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a is starting a new election at term 3"}
	{"level":"info","ts":"2023-12-12T00:37:01.296113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a became pre-candidate at term 3"}
	{"level":"info","ts":"2023-12-12T00:37:01.296217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a received MsgPreVoteResp from 1088a855a4aa8d0a at term 3"}
	{"level":"info","ts":"2023-12-12T00:37:01.296246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a became candidate at term 4"}
	{"level":"info","ts":"2023-12-12T00:37:01.29635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a received MsgVoteResp from 1088a855a4aa8d0a at term 4"}
	{"level":"info","ts":"2023-12-12T00:37:01.296374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a became leader at term 4"}
	{"level":"info","ts":"2023-12-12T00:37:01.296534Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1088a855a4aa8d0a elected leader 1088a855a4aa8d0a at term 4"}
	{"level":"info","ts":"2023-12-12T00:37:01.300242Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"1088a855a4aa8d0a","local-member-attributes":"{Name:multinode-859606 ClientURLs:[https://192.168.39.40:2379]}","request-path":"/0/members/1088a855a4aa8d0a/attributes","cluster-id":"ca485a4cd00ef8c5","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T00:37:01.300337Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T00:37:01.301147Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T00:37:01.301196Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T00:37:01.300444Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T00:37:01.302471Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T00:37:01.302781Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.40:2379"}
	
	* 
	* ==> kernel <==
	*  00:37:45 up 1 min,  0 users,  load average: 0.36, 0.15, 0.05
	Linux multinode-859606 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [2b07939ba9ef] <==
	* I1212 00:37:08.578043       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1212 00:37:08.578370       1 main.go:107] hostIP = 192.168.39.40
	podIP = 192.168.39.40
	I1212 00:37:08.578814       1 main.go:116] setting mtu 1500 for CNI 
	I1212 00:37:08.578863       1 main.go:146] kindnetd IP family: "ipv4"
	I1212 00:37:08.578886       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1212 00:37:09.268622       1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
	I1212 00:37:09.268801       1 main.go:227] handling current node
	I1212 00:37:09.269373       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I1212 00:37:09.269459       1 main.go:250] Node multinode-859606-m02 has CIDR [10.244.1.0/24] 
	I1212 00:37:09.270153       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.65 Flags: [] Table: 0} 
	I1212 00:37:19.282845       1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
	I1212 00:37:19.283164       1 main.go:227] handling current node
	I1212 00:37:19.283224       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I1212 00:37:19.283242       1 main.go:250] Node multinode-859606-m02 has CIDR [10.244.1.0/24] 
	I1212 00:37:29.296529       1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
	I1212 00:37:29.296715       1 main.go:227] handling current node
	I1212 00:37:29.296760       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I1212 00:37:29.296807       1 main.go:250] Node multinode-859606-m02 has CIDR [10.244.1.0/24] 
	I1212 00:37:39.311243       1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
	I1212 00:37:39.311305       1 main.go:227] handling current node
	I1212 00:37:39.311335       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I1212 00:37:39.311341       1 main.go:250] Node multinode-859606-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kindnet [55413175770e] <==
	* I1212 00:35:16.454929       1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
	I1212 00:35:16.454984       1 main.go:227] handling current node
	I1212 00:35:16.454995       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I1212 00:35:16.455001       1 main.go:250] Node multinode-859606-m02 has CIDR [10.244.1.0/24] 
	I1212 00:35:16.455337       1 main.go:223] Handling node with IPs: map[192.168.39.13:{}]
	I1212 00:35:16.455389       1 main.go:250] Node multinode-859606-m03 has CIDR [10.244.3.0/24] 
	I1212 00:35:26.471853       1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
	I1212 00:35:26.471968       1 main.go:227] handling current node
	I1212 00:35:26.472088       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I1212 00:35:26.472097       1 main.go:250] Node multinode-859606-m02 has CIDR [10.244.1.0/24] 
	I1212 00:35:26.472358       1 main.go:223] Handling node with IPs: map[192.168.39.13:{}]
	I1212 00:35:26.472371       1 main.go:250] Node multinode-859606-m03 has CIDR [10.244.3.0/24] 
	I1212 00:35:36.487874       1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
	I1212 00:35:36.488360       1 main.go:227] handling current node
	I1212 00:35:36.488546       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I1212 00:35:36.488629       1 main.go:250] Node multinode-859606-m02 has CIDR [10.244.1.0/24] 
	I1212 00:35:36.488840       1 main.go:223] Handling node with IPs: map[192.168.39.13:{}]
	I1212 00:35:36.488925       1 main.go:250] Node multinode-859606-m03 has CIDR [10.244.3.0/24] 
	I1212 00:35:46.494897       1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
	I1212 00:35:46.495056       1 main.go:227] handling current node
	I1212 00:35:46.495149       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I1212 00:35:46.495206       1 main.go:250] Node multinode-859606-m02 has CIDR [10.244.1.0/24] 
	I1212 00:35:46.495503       1 main.go:223] Handling node with IPs: map[192.168.39.13:{}]
	I1212 00:35:46.495589       1 main.go:250] Node multinode-859606-m03 has CIDR [10.244.2.0/24] 
	I1212 00:35:46.495749       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.13 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [1ebf2246a188] <==
	* I1212 00:37:02.702627       1 controller.go:116] Starting legacy_token_tracking_controller
	I1212 00:37:02.702671       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I1212 00:37:02.736556       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1212 00:37:02.736761       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1212 00:37:02.802710       1 shared_informer.go:318] Caches are synced for configmaps
	I1212 00:37:02.834500       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 00:37:02.837240       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 00:37:02.838205       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 00:37:02.838392       1 aggregator.go:166] initial CRD sync complete...
	I1212 00:37:02.838579       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 00:37:02.838677       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 00:37:02.838752       1 cache.go:39] Caches are synced for autoregister controller
	I1212 00:37:02.872198       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1212 00:37:02.888412       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1212 00:37:02.888626       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1212 00:37:02.897702       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1212 00:37:02.897799       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 00:37:03.694700       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1212 00:37:04.133856       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.40]
	I1212 00:37:04.135091       1 controller.go:624] quota admission added evaluator for: endpoints
	I1212 00:37:05.819657       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 00:37:06.057500       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 00:37:06.070583       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 00:37:06.149126       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 00:37:06.158341       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [a49117d4a4c8] <==
	* W1212 00:36:03.749168       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 00:36:03.755376       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 00:36:03.761444       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 00:36:03.789634       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 00:36:03.826969       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 00:36:03.835973       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 00:36:03.858980       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 00:36:03.863797       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 00:36:03.896847       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 00:36:03.946166       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 00:36:04.009435       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 00:36:04.038722       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 00:36:04.070132       1 logging.go:59] [core] [Channel #184 SubChannel #185] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 00:36:04.079651       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 00:36:04.119604       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 00:36:04.130181       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 00:36:04.136164       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 00:36:04.136486       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 00:36:04.243597       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 00:36:04.254632       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 00:36:04.304649       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 00:36:04.318605       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 00:36:04.327720       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 00:36:04.363477       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 00:36:04.392883       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [8699415e5935] <==
	* I1212 00:37:15.188062       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-859606-m02\" does not exist"
	I1212 00:37:15.193974       1 shared_informer.go:318] Caches are synced for TTL
	I1212 00:37:15.195602       1 shared_informer.go:318] Caches are synced for GC
	I1212 00:37:15.211806       1 shared_informer.go:318] Caches are synced for persistent volume
	I1212 00:37:15.219063       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 00:37:15.230936       1 shared_informer.go:318] Caches are synced for daemon sets
	I1212 00:37:15.241063       1 shared_informer.go:318] Caches are synced for attach detach
	I1212 00:37:15.249064       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1212 00:37:15.284093       1 shared_informer.go:318] Caches are synced for taint
	I1212 00:37:15.285082       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1212 00:37:15.285495       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I1212 00:37:15.285713       1 taint_manager.go:210] "Sending events to api server"
	I1212 00:37:15.286437       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-859606"
	I1212 00:37:15.286709       1 shared_informer.go:318] Caches are synced for node
	I1212 00:37:15.286903       1 range_allocator.go:174] "Sending events to api server"
	I1212 00:37:15.286973       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1212 00:37:15.287128       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1212 00:37:15.287247       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1212 00:37:15.287228       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-859606-m02"
	I1212 00:37:15.287677       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1212 00:37:15.290093       1 event.go:307] "Event occurred" object="multinode-859606" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-859606 event: Registered Node multinode-859606 in Controller"
	I1212 00:37:15.290280       1 event.go:307] "Event occurred" object="multinode-859606-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-859606-m02 event: Registered Node multinode-859606-m02 in Controller"
	I1212 00:37:15.627185       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 00:37:15.627247       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1212 00:37:15.644407       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [fc27b8583502] <==
	* I1212 00:35:08.532461       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-lr9gw" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-lr9gw"
	I1212 00:35:13.179054       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="75.824µs"
	I1212 00:35:13.284393       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="196.544µs"
	I1212 00:35:13.290737       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="93.165µs"
	I1212 00:35:36.003626       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-npwlc"
	I1212 00:35:36.011695       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="22.300235ms"
	I1212 00:35:36.026847       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="15.01405ms"
	I1212 00:35:36.027555       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="655.698µs"
	I1212 00:35:36.045674       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="33.429µs"
	I1212 00:35:37.908609       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="13.802394ms"
	I1212 00:35:37.908718       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="54.044µs"
	I1212 00:35:38.012991       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-859606-m02"
	I1212 00:35:38.539106       1 event.go:307] "Event occurred" object="multinode-859606-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-859606-m03 event: Removing Node multinode-859606-m03 from Controller"
	I1212 00:35:38.916634       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-859606-m03\" does not exist"
	I1212 00:35:38.918930       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-859606-m02"
	I1212 00:35:38.921523       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-jrfh4" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-jrfh4"
	I1212 00:35:38.946516       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-859606-m03" podCIDRs=["10.244.2.0/24"]
	I1212 00:35:39.773060       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="48.621µs"
	I1212 00:35:40.055003       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="53.052µs"
	I1212 00:35:40.062833       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="38.141µs"
	I1212 00:35:40.066206       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="49.721µs"
	I1212 00:35:43.539946       1 event.go:307] "Event occurred" object="multinode-859606-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-859606-m03 event: Registered Node multinode-859606-m03 in Controller"
	I1212 00:35:50.130971       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-859606-m03"
	I1212 00:35:52.529054       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-859606-m02"
	I1212 00:35:53.541785       1 event.go:307] "Event occurred" object="multinode-859606-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-859606-m03 event: Removing Node multinode-859606-m03 from Controller"
	
	* 
	* ==> kube-proxy [810342f9e6bb] <==
	* I1212 00:37:05.425583       1 server_others.go:69] "Using iptables proxy"
	I1212 00:37:05.461684       1 node.go:141] Successfully retrieved node IP: 192.168.39.40
	I1212 00:37:05.600627       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 00:37:05.600673       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 00:37:05.604235       1 server_others.go:152] "Using iptables Proxier"
	I1212 00:37:05.605144       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 00:37:05.605600       1 server.go:846] "Version info" version="v1.28.4"
	I1212 00:37:05.605643       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:37:05.607215       1 config.go:188] "Starting service config controller"
	I1212 00:37:05.607603       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 00:37:05.607741       1 config.go:97] "Starting endpoint slice config controller"
	I1212 00:37:05.607777       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 00:37:05.611875       1 config.go:315] "Starting node config controller"
	I1212 00:37:05.611943       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 00:37:05.708678       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 00:37:05.708741       1 shared_informer.go:318] Caches are synced for service config
	I1212 00:37:05.716211       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [b63a75f45416] <==
	* I1212 00:34:11.753699       1 server_others.go:69] "Using iptables proxy"
	I1212 00:34:11.786606       1 node.go:141] Successfully retrieved node IP: 192.168.39.40
	I1212 00:34:11.853481       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 00:34:11.853530       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 00:34:11.855858       1 server_others.go:152] "Using iptables Proxier"
	I1212 00:34:11.856499       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 00:34:11.856927       1 server.go:846] "Version info" version="v1.28.4"
	I1212 00:34:11.856966       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:34:11.858681       1 config.go:188] "Starting service config controller"
	I1212 00:34:11.859224       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 00:34:11.859381       1 config.go:315] "Starting node config controller"
	I1212 00:34:11.859414       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 00:34:11.859947       1 config.go:97] "Starting endpoint slice config controller"
	I1212 00:34:11.859982       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 00:34:11.959936       1 shared_informer.go:318] Caches are synced for node config
	I1212 00:34:11.959988       1 shared_informer.go:318] Caches are synced for service config
	I1212 00:34:11.961091       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [407d7ddb6422] <==
	* I1212 00:37:00.229914       1 serving.go:348] Generated self-signed cert in-memory
	W1212 00:37:02.799108       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 00:37:02.799169       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 00:37:02.799183       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 00:37:02.799190       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 00:37:02.854155       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1212 00:37:02.857110       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:37:02.866129       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 00:37:02.869202       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 00:37:02.870581       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1212 00:37:02.872823       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1212 00:37:02.970156       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [4ba778c674f0] <==
	* I1212 00:34:08.393863       1 serving.go:348] Generated self-signed cert in-memory
	W1212 00:34:10.311795       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 00:34:10.311894       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 00:34:10.311915       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 00:34:10.312041       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 00:34:10.359778       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1212 00:34:10.359832       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:34:10.362119       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 00:34:10.362731       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 00:34:10.363426       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1212 00:34:10.363524       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1212 00:34:10.463812       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 00:35:54.270484       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1212 00:35:54.270615       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1212 00:35:54.271035       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1212 00:35:54.271407       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 00:36:31 UTC, ends at Tue 2023-12-12 00:37:46 UTC. --
	Dec 12 00:37:03 multinode-859606 kubelet[1312]: E1212 00:37:03.774931    1312 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2-kube-api-access-wgdzk podName:e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2 nodeName:}" failed. No retries permitted until 2023-12-12 00:37:04.274913614 +0000 UTC m=+7.905251962 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wgdzk" (UniqueName: "kubernetes.io/projected/e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2-kube-api-access-wgdzk") pod "busybox-5bc68d56bd-8rtcm" (UID: "e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2") : object "default"/"kube-root-ca.crt" not registered
	Dec 12 00:37:04 multinode-859606 kubelet[1312]: E1212 00:37:04.249069    1312 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 00:37:04 multinode-859606 kubelet[1312]: E1212 00:37:04.249160    1312 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3605a003-e8d6-46b2-8fe7-f45647656622-config-volume podName:3605a003-e8d6-46b2-8fe7-f45647656622 nodeName:}" failed. No retries permitted until 2023-12-12 00:37:05.249144334 +0000 UTC m=+8.879482697 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3605a003-e8d6-46b2-8fe7-f45647656622-config-volume") pod "coredns-5dd5756b68-t9jz8" (UID: "3605a003-e8d6-46b2-8fe7-f45647656622") : object "kube-system"/"coredns" not registered
	Dec 12 00:37:04 multinode-859606 kubelet[1312]: E1212 00:37:04.349652    1312 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Dec 12 00:37:04 multinode-859606 kubelet[1312]: E1212 00:37:04.349712    1312 projected.go:198] Error preparing data for projected volume kube-api-access-wgdzk for pod default/busybox-5bc68d56bd-8rtcm: object "default"/"kube-root-ca.crt" not registered
	Dec 12 00:37:04 multinode-859606 kubelet[1312]: E1212 00:37:04.349766    1312 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2-kube-api-access-wgdzk podName:e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2 nodeName:}" failed. No retries permitted until 2023-12-12 00:37:05.349752141 +0000 UTC m=+8.980090501 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wgdzk" (UniqueName: "kubernetes.io/projected/e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2-kube-api-access-wgdzk") pod "busybox-5bc68d56bd-8rtcm" (UID: "e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2") : object "default"/"kube-root-ca.crt" not registered
	Dec 12 00:37:05 multinode-859606 kubelet[1312]: E1212 00:37:05.260507    1312 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 00:37:05 multinode-859606 kubelet[1312]: E1212 00:37:05.261205    1312 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3605a003-e8d6-46b2-8fe7-f45647656622-config-volume podName:3605a003-e8d6-46b2-8fe7-f45647656622 nodeName:}" failed. No retries permitted until 2023-12-12 00:37:07.261182355 +0000 UTC m=+10.891520705 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3605a003-e8d6-46b2-8fe7-f45647656622-config-volume") pod "coredns-5dd5756b68-t9jz8" (UID: "3605a003-e8d6-46b2-8fe7-f45647656622") : object "kube-system"/"coredns" not registered
	Dec 12 00:37:05 multinode-859606 kubelet[1312]: E1212 00:37:05.367205    1312 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Dec 12 00:37:05 multinode-859606 kubelet[1312]: E1212 00:37:05.367244    1312 projected.go:198] Error preparing data for projected volume kube-api-access-wgdzk for pod default/busybox-5bc68d56bd-8rtcm: object "default"/"kube-root-ca.crt" not registered
	Dec 12 00:37:05 multinode-859606 kubelet[1312]: E1212 00:37:05.367339    1312 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2-kube-api-access-wgdzk podName:e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2 nodeName:}" failed. No retries permitted until 2023-12-12 00:37:07.367321008 +0000 UTC m=+10.997659370 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wgdzk" (UniqueName: "kubernetes.io/projected/e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2-kube-api-access-wgdzk") pod "busybox-5bc68d56bd-8rtcm" (UID: "e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2") : object "default"/"kube-root-ca.crt" not registered
	Dec 12 00:37:05 multinode-859606 kubelet[1312]: I1212 00:37:05.521465    1312 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3ad9a474f7aa2a0c235a5125ee5afda9726fe7b702b1ec852e4ae79591c7981"
	Dec 12 00:37:05 multinode-859606 kubelet[1312]: E1212 00:37:05.696475    1312 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-t9jz8" podUID="3605a003-e8d6-46b2-8fe7-f45647656622"
	Dec 12 00:37:05 multinode-859606 kubelet[1312]: E1212 00:37:05.697277    1312 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-8rtcm" podUID="e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2"
	Dec 12 00:37:07 multinode-859606 kubelet[1312]: E1212 00:37:07.285269    1312 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 00:37:07 multinode-859606 kubelet[1312]: E1212 00:37:07.285363    1312 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3605a003-e8d6-46b2-8fe7-f45647656622-config-volume podName:3605a003-e8d6-46b2-8fe7-f45647656622 nodeName:}" failed. No retries permitted until 2023-12-12 00:37:11.285344975 +0000 UTC m=+14.915683323 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3605a003-e8d6-46b2-8fe7-f45647656622-config-volume") pod "coredns-5dd5756b68-t9jz8" (UID: "3605a003-e8d6-46b2-8fe7-f45647656622") : object "kube-system"/"coredns" not registered
	Dec 12 00:37:07 multinode-859606 kubelet[1312]: E1212 00:37:07.386358    1312 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Dec 12 00:37:07 multinode-859606 kubelet[1312]: E1212 00:37:07.386406    1312 projected.go:198] Error preparing data for projected volume kube-api-access-wgdzk for pod default/busybox-5bc68d56bd-8rtcm: object "default"/"kube-root-ca.crt" not registered
	Dec 12 00:37:07 multinode-859606 kubelet[1312]: E1212 00:37:07.386451    1312 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2-kube-api-access-wgdzk podName:e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2 nodeName:}" failed. No retries permitted until 2023-12-12 00:37:11.386438304 +0000 UTC m=+15.016776664 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-wgdzk" (UniqueName: "kubernetes.io/projected/e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2-kube-api-access-wgdzk") pod "busybox-5bc68d56bd-8rtcm" (UID: "e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2") : object "default"/"kube-root-ca.crt" not registered
	Dec 12 00:37:07 multinode-859606 kubelet[1312]: I1212 00:37:07.932205    1312 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a871816c58a42ddd362fd89fa0457159c939b88d434669ab9c87303a2cdce4ea"
	Dec 12 00:37:09 multinode-859606 kubelet[1312]: E1212 00:37:09.037465    1312 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-t9jz8" podUID="3605a003-e8d6-46b2-8fe7-f45647656622"
	Dec 12 00:37:09 multinode-859606 kubelet[1312]: E1212 00:37:09.038182    1312 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-8rtcm" podUID="e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2"
	Dec 12 00:37:09 multinode-859606 kubelet[1312]: I1212 00:37:09.525142    1312 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 12 00:37:12 multinode-859606 kubelet[1312]: I1212 00:37:12.550452    1312 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d54ee5c24673d29c1697cc6ea65d3e7ff3e3a6bd5430a949d8748c099c864ebe"
	Dec 12 00:37:12 multinode-859606 kubelet[1312]: I1212 00:37:12.635414    1312 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf1f78f2e3a90cc24f70123b2504134a6d0123ff6370d1bc64ce6dfdb1255ca3"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-859606 -n multinode-859606
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-859606 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartMultiNode (87.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (60.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-826505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
E1212 00:57:41.435277   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p bridge-826505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : exit status 90 (1m0.093445704s)

                                                
                                                
-- stdout --
	* [bridge-826505] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17764-80294/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-80294/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting control plane node bridge-826505 in cluster bridge-826505
	* Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:57:22.945696  123289 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:57:22.945997  123289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:57:22.946007  123289 out.go:309] Setting ErrFile to fd 2...
	I1212 00:57:22.946011  123289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:57:22.946190  123289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-80294/.minikube/bin
	I1212 00:57:22.946873  123289 out.go:303] Setting JSON to false
	I1212 00:57:22.948163  123289 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":13143,"bootTime":1702329500,"procs":387,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:57:22.948222  123289 start.go:138] virtualization: kvm guest
	I1212 00:57:22.950362  123289 out.go:177] * [bridge-826505] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 00:57:22.951894  123289 out.go:177]   - MINIKUBE_LOCATION=17764
	I1212 00:57:22.951903  123289 notify.go:220] Checking for updates...
	I1212 00:57:22.953404  123289 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:57:22.954921  123289 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17764-80294/kubeconfig
	I1212 00:57:22.956496  123289 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-80294/.minikube
	I1212 00:57:22.957935  123289 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:57:22.959519  123289 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:57:22.961473  123289 config.go:182] Loaded profile config "enable-default-cni-826505": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 00:57:22.961564  123289 config.go:182] Loaded profile config "false-826505": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 00:57:22.961641  123289 config.go:182] Loaded profile config "flannel-826505": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 00:57:22.961717  123289 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 00:57:23.008063  123289 out.go:177] * Using the kvm2 driver based on user configuration
	I1212 00:57:23.009651  123289 start.go:298] selected driver: kvm2
	I1212 00:57:23.009672  123289 start.go:902] validating driver "kvm2" against <nil>
	I1212 00:57:23.009687  123289 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:57:23.010744  123289 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:57:23.010850  123289 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17764-80294/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 00:57:23.030832  123289 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 00:57:23.030892  123289 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 00:57:23.031174  123289 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:57:23.031271  123289 cni.go:84] Creating CNI manager for "bridge"
	I1212 00:57:23.031291  123289 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 00:57:23.031303  123289 start_flags.go:323] config:
	{Name:bridge-826505 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-826505 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:57:23.031517  123289 iso.go:125] acquiring lock: {Name:mk9f395cbf4246894893bf64341667bb412992c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:57:23.033233  123289 out.go:177] * Starting control plane node bridge-826505 in cluster bridge-826505
	I1212 00:57:23.034711  123289 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 00:57:23.034760  123289 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17764-80294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 00:57:23.034771  123289 cache.go:56] Caching tarball of preloaded images
	I1212 00:57:23.034869  123289 preload.go:174] Found /home/jenkins/minikube-integration/17764-80294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 00:57:23.034886  123289 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 00:57:23.035033  123289 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/bridge-826505/config.json ...
	I1212 00:57:23.035062  123289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/bridge-826505/config.json: {Name:mk95e3fc513b87e3fbee225cb1615c91f6891c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:57:23.035218  123289 start.go:365] acquiring machines lock for bridge-826505: {Name:mk381e91746c2e5b8a4620fe3fd447d80375e413 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 00:57:53.540927  123289 start.go:369] acquired machines lock for "bridge-826505" in 30.505681076s
	I1212 00:57:53.541002  123289 start.go:93] Provisioning new machine with config: &{Name:bridge-826505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-826505 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 00:57:53.541141  123289 start.go:125] createHost starting for "" (driver="kvm2")
	I1212 00:57:53.543441  123289 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1212 00:57:53.543721  123289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1212 00:57:53.543775  123289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:57:53.560372  123289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35225
	I1212 00:57:53.560841  123289 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:57:53.561451  123289 main.go:141] libmachine: Using API Version  1
	I1212 00:57:53.561478  123289 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:57:53.561823  123289 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:57:53.562001  123289 main.go:141] libmachine: (bridge-826505) Calling .GetMachineName
	I1212 00:57:53.562173  123289 main.go:141] libmachine: (bridge-826505) Calling .DriverName
	I1212 00:57:53.562392  123289 start.go:159] libmachine.API.Create for "bridge-826505" (driver="kvm2")
	I1212 00:57:53.562426  123289 client.go:168] LocalClient.Create starting
	I1212 00:57:53.562468  123289 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem
	I1212 00:57:53.562515  123289 main.go:141] libmachine: Decoding PEM data...
	I1212 00:57:53.562545  123289 main.go:141] libmachine: Parsing certificate...
	I1212 00:57:53.562610  123289 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem
	I1212 00:57:53.562641  123289 main.go:141] libmachine: Decoding PEM data...
	I1212 00:57:53.562662  123289 main.go:141] libmachine: Parsing certificate...
	I1212 00:57:53.562693  123289 main.go:141] libmachine: Running pre-create checks...
	I1212 00:57:53.562708  123289 main.go:141] libmachine: (bridge-826505) Calling .PreCreateCheck
	I1212 00:57:53.563135  123289 main.go:141] libmachine: (bridge-826505) Calling .GetConfigRaw
	I1212 00:57:53.563629  123289 main.go:141] libmachine: Creating machine...
	I1212 00:57:53.563649  123289 main.go:141] libmachine: (bridge-826505) Calling .Create
	I1212 00:57:53.563794  123289 main.go:141] libmachine: (bridge-826505) Creating KVM machine...
	I1212 00:57:53.565025  123289 main.go:141] libmachine: (bridge-826505) DBG | found existing default KVM network
	I1212 00:57:53.566359  123289 main.go:141] libmachine: (bridge-826505) DBG | I1212 00:57:53.566193  123696 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ee:21:ca} reservation:<nil>}
	I1212 00:57:53.567434  123289 main.go:141] libmachine: (bridge-826505) DBG | I1212 00:57:53.567329  123696 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:e2:d3:1b} reservation:<nil>}
	I1212 00:57:53.568797  123289 main.go:141] libmachine: (bridge-826505) DBG | I1212 00:57:53.568696  123696 network.go:209] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000306980}
	I1212 00:57:53.574682  123289 main.go:141] libmachine: (bridge-826505) DBG | trying to create private KVM network mk-bridge-826505 192.168.61.0/24...
	I1212 00:57:53.660659  123289 main.go:141] libmachine: (bridge-826505) DBG | private KVM network mk-bridge-826505 192.168.61.0/24 created
	I1212 00:57:53.660692  123289 main.go:141] libmachine: (bridge-826505) DBG | I1212 00:57:53.660633  123696 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17764-80294/.minikube
	I1212 00:57:53.660707  123289 main.go:141] libmachine: (bridge-826505) Setting up store path in /home/jenkins/minikube-integration/17764-80294/.minikube/machines/bridge-826505 ...
	I1212 00:57:53.660738  123289 main.go:141] libmachine: (bridge-826505) Building disk image from file:///home/jenkins/minikube-integration/17764-80294/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso
	I1212 00:57:53.660770  123289 main.go:141] libmachine: (bridge-826505) Downloading /home/jenkins/minikube-integration/17764-80294/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17764-80294/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso...
	I1212 00:57:53.903928  123289 main.go:141] libmachine: (bridge-826505) DBG | I1212 00:57:53.903733  123696 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/bridge-826505/id_rsa...
	I1212 00:57:54.001899  123289 main.go:141] libmachine: (bridge-826505) DBG | I1212 00:57:54.001740  123696 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/bridge-826505/bridge-826505.rawdisk...
	I1212 00:57:54.001927  123289 main.go:141] libmachine: (bridge-826505) DBG | Writing magic tar header
	I1212 00:57:54.001942  123289 main.go:141] libmachine: (bridge-826505) DBG | Writing SSH key tar header
	I1212 00:57:54.001955  123289 main.go:141] libmachine: (bridge-826505) DBG | I1212 00:57:54.001910  123696 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17764-80294/.minikube/machines/bridge-826505 ...
	I1212 00:57:54.002063  123289 main.go:141] libmachine: (bridge-826505) Setting executable bit set on /home/jenkins/minikube-integration/17764-80294/.minikube/machines/bridge-826505 (perms=drwx------)
	I1212 00:57:54.002094  123289 main.go:141] libmachine: (bridge-826505) Setting executable bit set on /home/jenkins/minikube-integration/17764-80294/.minikube/machines (perms=drwxr-xr-x)
	I1212 00:57:54.002106  123289 main.go:141] libmachine: (bridge-826505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/bridge-826505
	I1212 00:57:54.002119  123289 main.go:141] libmachine: (bridge-826505) Setting executable bit set on /home/jenkins/minikube-integration/17764-80294/.minikube (perms=drwxr-xr-x)
	I1212 00:57:54.002135  123289 main.go:141] libmachine: (bridge-826505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17764-80294/.minikube/machines
	I1212 00:57:54.002150  123289 main.go:141] libmachine: (bridge-826505) Setting executable bit set on /home/jenkins/minikube-integration/17764-80294 (perms=drwxrwxr-x)
	I1212 00:57:54.002167  123289 main.go:141] libmachine: (bridge-826505) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 00:57:54.002180  123289 main.go:141] libmachine: (bridge-826505) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 00:57:54.002195  123289 main.go:141] libmachine: (bridge-826505) Creating domain...
	I1212 00:57:54.002216  123289 main.go:141] libmachine: (bridge-826505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17764-80294/.minikube
	I1212 00:57:54.002234  123289 main.go:141] libmachine: (bridge-826505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17764-80294
	I1212 00:57:54.002251  123289 main.go:141] libmachine: (bridge-826505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 00:57:54.002266  123289 main.go:141] libmachine: (bridge-826505) DBG | Checking permissions on dir: /home/jenkins
	I1212 00:57:54.002278  123289 main.go:141] libmachine: (bridge-826505) DBG | Checking permissions on dir: /home
	I1212 00:57:54.002291  123289 main.go:141] libmachine: (bridge-826505) DBG | Skipping /home - not owner
	I1212 00:57:54.003511  123289 main.go:141] libmachine: (bridge-826505) define libvirt domain using xml: 
	I1212 00:57:54.003539  123289 main.go:141] libmachine: (bridge-826505) <domain type='kvm'>
	I1212 00:57:54.003551  123289 main.go:141] libmachine: (bridge-826505)   <name>bridge-826505</name>
	I1212 00:57:54.003565  123289 main.go:141] libmachine: (bridge-826505)   <memory unit='MiB'>3072</memory>
	I1212 00:57:54.003575  123289 main.go:141] libmachine: (bridge-826505)   <vcpu>2</vcpu>
	I1212 00:57:54.003587  123289 main.go:141] libmachine: (bridge-826505)   <features>
	I1212 00:57:54.003599  123289 main.go:141] libmachine: (bridge-826505)     <acpi/>
	I1212 00:57:54.003610  123289 main.go:141] libmachine: (bridge-826505)     <apic/>
	I1212 00:57:54.003647  123289 main.go:141] libmachine: (bridge-826505)     <pae/>
	I1212 00:57:54.003673  123289 main.go:141] libmachine: (bridge-826505)     
	I1212 00:57:54.003700  123289 main.go:141] libmachine: (bridge-826505)   </features>
	I1212 00:57:54.003713  123289 main.go:141] libmachine: (bridge-826505)   <cpu mode='host-passthrough'>
	I1212 00:57:54.003725  123289 main.go:141] libmachine: (bridge-826505)   
	I1212 00:57:54.003740  123289 main.go:141] libmachine: (bridge-826505)   </cpu>
	I1212 00:57:54.003776  123289 main.go:141] libmachine: (bridge-826505)   <os>
	I1212 00:57:54.003801  123289 main.go:141] libmachine: (bridge-826505)     <type>hvm</type>
	I1212 00:57:54.003816  123289 main.go:141] libmachine: (bridge-826505)     <boot dev='cdrom'/>
	I1212 00:57:54.003827  123289 main.go:141] libmachine: (bridge-826505)     <boot dev='hd'/>
	I1212 00:57:54.003842  123289 main.go:141] libmachine: (bridge-826505)     <bootmenu enable='no'/>
	I1212 00:57:54.003853  123289 main.go:141] libmachine: (bridge-826505)   </os>
	I1212 00:57:54.003866  123289 main.go:141] libmachine: (bridge-826505)   <devices>
	I1212 00:57:54.003878  123289 main.go:141] libmachine: (bridge-826505)     <disk type='file' device='cdrom'>
	I1212 00:57:54.003899  123289 main.go:141] libmachine: (bridge-826505)       <source file='/home/jenkins/minikube-integration/17764-80294/.minikube/machines/bridge-826505/boot2docker.iso'/>
	I1212 00:57:54.003927  123289 main.go:141] libmachine: (bridge-826505)       <target dev='hdc' bus='scsi'/>
	I1212 00:57:54.003941  123289 main.go:141] libmachine: (bridge-826505)       <readonly/>
	I1212 00:57:54.003955  123289 main.go:141] libmachine: (bridge-826505)     </disk>
	I1212 00:57:54.003964  123289 main.go:141] libmachine: (bridge-826505)     <disk type='file' device='disk'>
	I1212 00:57:54.003979  123289 main.go:141] libmachine: (bridge-826505)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 00:57:54.004006  123289 main.go:141] libmachine: (bridge-826505)       <source file='/home/jenkins/minikube-integration/17764-80294/.minikube/machines/bridge-826505/bridge-826505.rawdisk'/>
	I1212 00:57:54.004026  123289 main.go:141] libmachine: (bridge-826505)       <target dev='hda' bus='virtio'/>
	I1212 00:57:54.004036  123289 main.go:141] libmachine: (bridge-826505)     </disk>
	I1212 00:57:54.004049  123289 main.go:141] libmachine: (bridge-826505)     <interface type='network'>
	I1212 00:57:54.004064  123289 main.go:141] libmachine: (bridge-826505)       <source network='mk-bridge-826505'/>
	I1212 00:57:54.004076  123289 main.go:141] libmachine: (bridge-826505)       <model type='virtio'/>
	I1212 00:57:54.004089  123289 main.go:141] libmachine: (bridge-826505)     </interface>
	I1212 00:57:54.004101  123289 main.go:141] libmachine: (bridge-826505)     <interface type='network'>
	I1212 00:57:54.004115  123289 main.go:141] libmachine: (bridge-826505)       <source network='default'/>
	I1212 00:57:54.004128  123289 main.go:141] libmachine: (bridge-826505)       <model type='virtio'/>
	I1212 00:57:54.004141  123289 main.go:141] libmachine: (bridge-826505)     </interface>
	I1212 00:57:54.004153  123289 main.go:141] libmachine: (bridge-826505)     <serial type='pty'>
	I1212 00:57:54.004166  123289 main.go:141] libmachine: (bridge-826505)       <target port='0'/>
	I1212 00:57:54.004178  123289 main.go:141] libmachine: (bridge-826505)     </serial>
	I1212 00:57:54.004190  123289 main.go:141] libmachine: (bridge-826505)     <console type='pty'>
	I1212 00:57:54.004202  123289 main.go:141] libmachine: (bridge-826505)       <target type='serial' port='0'/>
	I1212 00:57:54.004212  123289 main.go:141] libmachine: (bridge-826505)     </console>
	I1212 00:57:54.004224  123289 main.go:141] libmachine: (bridge-826505)     <rng model='virtio'>
	I1212 00:57:54.004243  123289 main.go:141] libmachine: (bridge-826505)       <backend model='random'>/dev/random</backend>
	I1212 00:57:54.004255  123289 main.go:141] libmachine: (bridge-826505)     </rng>
	I1212 00:57:54.004267  123289 main.go:141] libmachine: (bridge-826505)     
	I1212 00:57:54.004275  123289 main.go:141] libmachine: (bridge-826505)     
	I1212 00:57:54.004288  123289 main.go:141] libmachine: (bridge-826505)   </devices>
	I1212 00:57:54.004299  123289 main.go:141] libmachine: (bridge-826505) </domain>
	I1212 00:57:54.004311  123289 main.go:141] libmachine: (bridge-826505) 
	I1212 00:57:54.011310  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:16:ef:67 in network default
	I1212 00:57:54.011961  123289 main.go:141] libmachine: (bridge-826505) Ensuring networks are active...
	I1212 00:57:54.012003  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:57:54.012760  123289 main.go:141] libmachine: (bridge-826505) Ensuring network default is active
	I1212 00:57:54.013163  123289 main.go:141] libmachine: (bridge-826505) Ensuring network mk-bridge-826505 is active
	I1212 00:57:54.013824  123289 main.go:141] libmachine: (bridge-826505) Getting domain xml...
	I1212 00:57:54.014700  123289 main.go:141] libmachine: (bridge-826505) Creating domain...
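	The driver then hands the XML printed above to libvirt: it makes sure both networks are active, defines the domain, and boots it. A rough sketch of that sequence using the libvirt Go bindings (libvirt.org/go/libvirt) follows; the exact binding API names are an assumption here, and this is not the kvm2 driver's own code.

```go
// Illustrative only: ensure networks are active, define the domain from XML,
// then start it. Binding method names assumed from libvirt.org/go/libvirt.
package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func ensureNetworkActive(conn *libvirt.Connect, name string) error {
	net, err := conn.LookupNetworkByName(name)
	if err != nil {
		return err
	}
	defer net.Free()
	active, err := net.IsActive()
	if err != nil {
		return err
	}
	if !active {
		return net.Create() // start a defined but inactive network
	}
	return nil
}

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	for _, n := range []string{"default", "mk-bridge-826505"} {
		if err := ensureNetworkActive(conn, n); err != nil {
			log.Fatalf("activating network %s: %v", n, err)
		}
	}

	domainXML := "<domain type='kvm'>...</domain>" // the XML assembled above
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil { // boots the defined domain
		log.Fatal(err)
	}
}
```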
	I1212 00:57:55.302338  123289 main.go:141] libmachine: (bridge-826505) Waiting to get IP...
	I1212 00:57:55.303221  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:57:55.303731  123289 main.go:141] libmachine: (bridge-826505) DBG | unable to find current IP address of domain bridge-826505 in network mk-bridge-826505
	I1212 00:57:55.303768  123289 main.go:141] libmachine: (bridge-826505) DBG | I1212 00:57:55.303695  123696 retry.go:31] will retry after 195.112515ms: waiting for machine to come up
	I1212 00:57:55.500285  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:57:55.500918  123289 main.go:141] libmachine: (bridge-826505) DBG | unable to find current IP address of domain bridge-826505 in network mk-bridge-826505
	I1212 00:57:55.500953  123289 main.go:141] libmachine: (bridge-826505) DBG | I1212 00:57:55.500838  123696 retry.go:31] will retry after 254.928254ms: waiting for machine to come up
	I1212 00:57:55.757525  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:57:55.758060  123289 main.go:141] libmachine: (bridge-826505) DBG | unable to find current IP address of domain bridge-826505 in network mk-bridge-826505
	I1212 00:57:55.758091  123289 main.go:141] libmachine: (bridge-826505) DBG | I1212 00:57:55.758010  123696 retry.go:31] will retry after 419.675453ms: waiting for machine to come up
	I1212 00:57:56.179632  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:57:56.180308  123289 main.go:141] libmachine: (bridge-826505) DBG | unable to find current IP address of domain bridge-826505 in network mk-bridge-826505
	I1212 00:57:56.180337  123289 main.go:141] libmachine: (bridge-826505) DBG | I1212 00:57:56.180256  123696 retry.go:31] will retry after 368.43039ms: waiting for machine to come up
	I1212 00:57:56.549977  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:57:56.550578  123289 main.go:141] libmachine: (bridge-826505) DBG | unable to find current IP address of domain bridge-826505 in network mk-bridge-826505
	I1212 00:57:56.550604  123289 main.go:141] libmachine: (bridge-826505) DBG | I1212 00:57:56.550518  123696 retry.go:31] will retry after 466.377324ms: waiting for machine to come up
	I1212 00:57:57.018142  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:57:57.018735  123289 main.go:141] libmachine: (bridge-826505) DBG | unable to find current IP address of domain bridge-826505 in network mk-bridge-826505
	I1212 00:57:57.018765  123289 main.go:141] libmachine: (bridge-826505) DBG | I1212 00:57:57.018666  123696 retry.go:31] will retry after 914.939187ms: waiting for machine to come up
	I1212 00:57:57.935103  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:57:57.935841  123289 main.go:141] libmachine: (bridge-826505) DBG | unable to find current IP address of domain bridge-826505 in network mk-bridge-826505
	I1212 00:57:57.935876  123289 main.go:141] libmachine: (bridge-826505) DBG | I1212 00:57:57.935751  123696 retry.go:31] will retry after 874.607993ms: waiting for machine to come up
	I1212 00:57:58.811717  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:57:58.812402  123289 main.go:141] libmachine: (bridge-826505) DBG | unable to find current IP address of domain bridge-826505 in network mk-bridge-826505
	I1212 00:57:58.812431  123289 main.go:141] libmachine: (bridge-826505) DBG | I1212 00:57:58.812369  123696 retry.go:31] will retry after 1.488719173s: waiting for machine to come up
	I1212 00:58:00.303123  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:00.303643  123289 main.go:141] libmachine: (bridge-826505) DBG | unable to find current IP address of domain bridge-826505 in network mk-bridge-826505
	I1212 00:58:00.303668  123289 main.go:141] libmachine: (bridge-826505) DBG | I1212 00:58:00.303600  123696 retry.go:31] will retry after 1.836513146s: waiting for machine to come up
	I1212 00:58:02.141762  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:02.142304  123289 main.go:141] libmachine: (bridge-826505) DBG | unable to find current IP address of domain bridge-826505 in network mk-bridge-826505
	I1212 00:58:02.142336  123289 main.go:141] libmachine: (bridge-826505) DBG | I1212 00:58:02.142253  123696 retry.go:31] will retry after 1.60992263s: waiting for machine to come up
	I1212 00:58:03.754000  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:03.754720  123289 main.go:141] libmachine: (bridge-826505) DBG | unable to find current IP address of domain bridge-826505 in network mk-bridge-826505
	I1212 00:58:03.754753  123289 main.go:141] libmachine: (bridge-826505) DBG | I1212 00:58:03.754705  123696 retry.go:31] will retry after 2.725977665s: waiting for machine to come up
	I1212 00:58:06.482605  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:06.483282  123289 main.go:141] libmachine: (bridge-826505) DBG | unable to find current IP address of domain bridge-826505 in network mk-bridge-826505
	I1212 00:58:06.483312  123289 main.go:141] libmachine: (bridge-826505) DBG | I1212 00:58:06.483208  123696 retry.go:31] will retry after 3.531566155s: waiting for machine to come up
	I1212 00:58:10.016561  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:10.018399  123289 main.go:141] libmachine: (bridge-826505) DBG | unable to find current IP address of domain bridge-826505 in network mk-bridge-826505
	I1212 00:58:10.018427  123289 main.go:141] libmachine: (bridge-826505) DBG | I1212 00:58:10.018299  123696 retry.go:31] will retry after 3.525743737s: waiting for machine to come up
	I1212 00:58:13.547631  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:13.548237  123289 main.go:141] libmachine: (bridge-826505) DBG | unable to find current IP address of domain bridge-826505 in network mk-bridge-826505
	I1212 00:58:13.548263  123289 main.go:141] libmachine: (bridge-826505) DBG | I1212 00:58:13.548180  123696 retry.go:31] will retry after 3.535825685s: waiting for machine to come up
	I1212 00:58:17.300169  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:17.300648  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has current primary IP address 192.168.61.215 and MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:17.300676  123289 main.go:141] libmachine: (bridge-826505) Found IP for machine: 192.168.61.215
	I1212 00:58:17.300691  123289 main.go:141] libmachine: (bridge-826505) Reserving static IP address...
	I1212 00:58:17.301061  123289 main.go:141] libmachine: (bridge-826505) DBG | unable to find host DHCP lease matching {name: "bridge-826505", mac: "52:54:00:ff:38:77", ip: "192.168.61.215"} in network mk-bridge-826505
	I1212 00:58:17.380746  123289 main.go:141] libmachine: (bridge-826505) Reserved static IP address: 192.168.61.215
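	The "will retry after …" lines above are a polling loop with a growing, jittered delay: ask libvirt for the domain's DHCP lease, and if it is not there yet, sleep a little longer and try again until a deadline. A self-contained sketch of that pattern is shown below; the lookup function is a stand-in, not the real lease query.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// growing the wait between attempts the way the log above does.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	wait := 200 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		// add jitter so concurrent waiters do not poll in lockstep
		sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("retry %d: will retry after %s: waiting for machine to come up\n", attempt, sleep)
		time.Sleep(sleep)
		if wait < 4*time.Second {
			wait *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	start := time.Now()
	ip, err := waitForIP(func() (string, error) {
		// stand-in for "parse the DHCP leases of mk-bridge-826505 for our MAC"
		if time.Since(start) > 3*time.Second {
			return "192.168.61.215", nil
		}
		return "", errors.New("no lease yet")
	}, 30*time.Second)
	fmt.Println(ip, err)
}
```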
	I1212 00:58:17.380790  123289 main.go:141] libmachine: (bridge-826505) Waiting for SSH to be available...
	I1212 00:58:17.380802  123289 main.go:141] libmachine: (bridge-826505) DBG | Getting to WaitForSSH function...
	I1212 00:58:17.383426  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:17.383841  123289 main.go:141] libmachine: (bridge-826505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:38:77", ip: ""} in network mk-bridge-826505: {Iface:virbr3 ExpiryTime:2023-12-12 01:58:12 +0000 UTC Type:0 Mac:52:54:00:ff:38:77 Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ff:38:77}
	I1212 00:58:17.383867  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined IP address 192.168.61.215 and MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:17.384102  123289 main.go:141] libmachine: (bridge-826505) DBG | Using SSH client type: external
	I1212 00:58:17.384132  123289 main.go:141] libmachine: (bridge-826505) DBG | Using SSH private key: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/bridge-826505/id_rsa (-rw-------)
	I1212 00:58:17.384162  123289 main.go:141] libmachine: (bridge-826505) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17764-80294/.minikube/machines/bridge-826505/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 00:58:17.384173  123289 main.go:141] libmachine: (bridge-826505) DBG | About to run SSH command:
	I1212 00:58:17.384197  123289 main.go:141] libmachine: (bridge-826505) DBG | exit 0
	I1212 00:58:17.487969  123289 main.go:141] libmachine: (bridge-826505) DBG | SSH cmd err, output: <nil>: 
	I1212 00:58:17.488316  123289 main.go:141] libmachine: (bridge-826505) KVM machine creation complete!
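	The "Waiting for SSH" step above boils down to repeatedly running `exit 0` on the guest with a throwaway ssh invocation built from the flags shown in the log; a zero exit status means sshd is up and the generated key is accepted. A minimal Go sketch of a single probe follows (the retry wrapper is omitted, and /path/to/id_rsa is a placeholder).

```go
package main

import (
	"fmt"
	"os/exec"
)

// waitForSSHOnce runs "exit 0" over ssh with flags similar to the external
// client shown in the log; a nil error means the guest is reachable.
func waitForSSHOnce(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh not ready: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := waitForSSHOnce("192.168.61.215", "/path/to/id_rsa")
	fmt.Println("ssh reachable:", err == nil)
}
```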
	I1212 00:58:17.488711  123289 main.go:141] libmachine: (bridge-826505) Calling .GetConfigRaw
	I1212 00:58:17.489307  123289 main.go:141] libmachine: (bridge-826505) Calling .DriverName
	I1212 00:58:17.489566  123289 main.go:141] libmachine: (bridge-826505) Calling .DriverName
	I1212 00:58:17.489755  123289 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 00:58:17.489775  123289 main.go:141] libmachine: (bridge-826505) Calling .GetState
	I1212 00:58:17.491341  123289 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 00:58:17.491360  123289 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 00:58:17.491369  123289 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 00:58:17.491378  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHHostname
	I1212 00:58:17.494431  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:17.495023  123289 main.go:141] libmachine: (bridge-826505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:38:77", ip: ""} in network mk-bridge-826505: {Iface:virbr3 ExpiryTime:2023-12-12 01:58:12 +0000 UTC Type:0 Mac:52:54:00:ff:38:77 Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:bridge-826505 Clientid:01:52:54:00:ff:38:77}
	I1212 00:58:17.495052  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined IP address 192.168.61.215 and MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:17.495275  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHPort
	I1212 00:58:17.495493  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHKeyPath
	I1212 00:58:17.495735  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHKeyPath
	I1212 00:58:17.495907  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHUsername
	I1212 00:58:17.496091  123289 main.go:141] libmachine: Using SSH client type: native
	I1212 00:58:17.496601  123289 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I1212 00:58:17.496624  123289 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 00:58:17.631670  123289 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:58:17.631703  123289 main.go:141] libmachine: Detecting the provisioner...
	I1212 00:58:17.631726  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHHostname
	I1212 00:58:17.635321  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:17.635843  123289 main.go:141] libmachine: (bridge-826505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:38:77", ip: ""} in network mk-bridge-826505: {Iface:virbr3 ExpiryTime:2023-12-12 01:58:12 +0000 UTC Type:0 Mac:52:54:00:ff:38:77 Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:bridge-826505 Clientid:01:52:54:00:ff:38:77}
	I1212 00:58:17.635870  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined IP address 192.168.61.215 and MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:17.636149  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHPort
	I1212 00:58:17.636411  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHKeyPath
	I1212 00:58:17.636634  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHKeyPath
	I1212 00:58:17.636812  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHUsername
	I1212 00:58:17.636988  123289 main.go:141] libmachine: Using SSH client type: native
	I1212 00:58:17.637466  123289 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I1212 00:58:17.637486  123289 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 00:58:17.781086  123289 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0ec83c8-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1212 00:58:17.781212  123289 main.go:141] libmachine: found compatible host: buildroot
	I1212 00:58:17.781228  123289 main.go:141] libmachine: Provisioning with buildroot...
	I1212 00:58:17.781241  123289 main.go:141] libmachine: (bridge-826505) Calling .GetMachineName
	I1212 00:58:17.781570  123289 buildroot.go:166] provisioning hostname "bridge-826505"
	I1212 00:58:17.781605  123289 main.go:141] libmachine: (bridge-826505) Calling .GetMachineName
	I1212 00:58:17.781841  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHHostname
	I1212 00:58:17.784827  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:17.785316  123289 main.go:141] libmachine: (bridge-826505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:38:77", ip: ""} in network mk-bridge-826505: {Iface:virbr3 ExpiryTime:2023-12-12 01:58:12 +0000 UTC Type:0 Mac:52:54:00:ff:38:77 Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:bridge-826505 Clientid:01:52:54:00:ff:38:77}
	I1212 00:58:17.785366  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined IP address 192.168.61.215 and MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:17.785535  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHPort
	I1212 00:58:17.785732  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHKeyPath
	I1212 00:58:17.785882  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHKeyPath
	I1212 00:58:17.786109  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHUsername
	I1212 00:58:17.786353  123289 main.go:141] libmachine: Using SSH client type: native
	I1212 00:58:17.786814  123289 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I1212 00:58:17.786839  123289 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-826505 && echo "bridge-826505" | sudo tee /etc/hostname
	I1212 00:58:17.940005  123289 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-826505
	
	I1212 00:58:17.940052  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHHostname
	I1212 00:58:17.943079  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:17.943499  123289 main.go:141] libmachine: (bridge-826505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:38:77", ip: ""} in network mk-bridge-826505: {Iface:virbr3 ExpiryTime:2023-12-12 01:58:12 +0000 UTC Type:0 Mac:52:54:00:ff:38:77 Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:bridge-826505 Clientid:01:52:54:00:ff:38:77}
	I1212 00:58:17.943536  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined IP address 192.168.61.215 and MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:17.943784  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHPort
	I1212 00:58:17.944013  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHKeyPath
	I1212 00:58:17.944195  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHKeyPath
	I1212 00:58:17.944360  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHUsername
	I1212 00:58:17.944570  123289 main.go:141] libmachine: Using SSH client type: native
	I1212 00:58:17.944947  123289 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I1212 00:58:17.944966  123289 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-826505' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-826505/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-826505' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:58:18.093154  123289 main.go:141] libmachine: SSH cmd err, output: <nil>: 
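	The hostname step runs the small shell script shown above: set /etc/hostname, then make sure /etc/hosts maps the new name, rewriting an existing 127.0.1.1 line if one is present and appending a new entry otherwise. The same /etc/hosts logic, sketched in Go purely for illustration (operating on a string rather than the real file):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname mirrors the shell above: if no line already maps the host
// name, rewrite an existing 127.0.1.1 entry or append a new one.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts // hostname already present
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	hosts := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostname(hosts, "bridge-826505"))
}
```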
	I1212 00:58:18.093196  123289 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17764-80294/.minikube CaCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17764-80294/.minikube}
	I1212 00:58:18.093218  123289 buildroot.go:174] setting up certificates
	I1212 00:58:18.093232  123289 provision.go:83] configureAuth start
	I1212 00:58:18.093246  123289 main.go:141] libmachine: (bridge-826505) Calling .GetMachineName
	I1212 00:58:18.093642  123289 main.go:141] libmachine: (bridge-826505) Calling .GetIP
	I1212 00:58:18.096564  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:18.097008  123289 main.go:141] libmachine: (bridge-826505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:38:77", ip: ""} in network mk-bridge-826505: {Iface:virbr3 ExpiryTime:2023-12-12 01:58:12 +0000 UTC Type:0 Mac:52:54:00:ff:38:77 Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:bridge-826505 Clientid:01:52:54:00:ff:38:77}
	I1212 00:58:18.097042  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined IP address 192.168.61.215 and MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:18.097243  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHHostname
	I1212 00:58:18.099745  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:18.100092  123289 main.go:141] libmachine: (bridge-826505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:38:77", ip: ""} in network mk-bridge-826505: {Iface:virbr3 ExpiryTime:2023-12-12 01:58:12 +0000 UTC Type:0 Mac:52:54:00:ff:38:77 Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:bridge-826505 Clientid:01:52:54:00:ff:38:77}
	I1212 00:58:18.100141  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined IP address 192.168.61.215 and MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:18.100280  123289 provision.go:138] copyHostCerts
	I1212 00:58:18.100341  123289 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem, removing ...
	I1212 00:58:18.100358  123289 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem
	I1212 00:58:18.100405  123289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem (1078 bytes)
	I1212 00:58:18.100497  123289 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem, removing ...
	I1212 00:58:18.100508  123289 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem
	I1212 00:58:18.100553  123289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem (1123 bytes)
	I1212 00:58:18.100631  123289 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem, removing ...
	I1212 00:58:18.100644  123289 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem
	I1212 00:58:18.100671  123289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem (1679 bytes)
	I1212 00:58:18.100736  123289 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca-key.pem org=jenkins.bridge-826505 san=[192.168.61.215 192.168.61.215 localhost 127.0.0.1 minikube bridge-826505]
	I1212 00:58:18.140609  123289 provision.go:172] copyRemoteCerts
	I1212 00:58:18.140671  123289 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:58:18.140695  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHHostname
	I1212 00:58:18.143381  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:18.143748  123289 main.go:141] libmachine: (bridge-826505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:38:77", ip: ""} in network mk-bridge-826505: {Iface:virbr3 ExpiryTime:2023-12-12 01:58:12 +0000 UTC Type:0 Mac:52:54:00:ff:38:77 Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:bridge-826505 Clientid:01:52:54:00:ff:38:77}
	I1212 00:58:18.143776  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined IP address 192.168.61.215 and MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:18.143948  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHPort
	I1212 00:58:18.144204  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHKeyPath
	I1212 00:58:18.144367  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHUsername
	I1212 00:58:18.144553  123289 sshutil.go:53] new ssh client: &{IP:192.168.61.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/bridge-826505/id_rsa Username:docker}
	I1212 00:58:18.248996  123289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:58:18.275553  123289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1212 00:58:18.305304  123289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:58:18.328585  123289 provision.go:86] duration metric: configureAuth took 235.334815ms
	I1212 00:58:18.328624  123289 buildroot.go:189] setting minikube options for container-runtime
	I1212 00:58:18.328848  123289 config.go:182] Loaded profile config "bridge-826505": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 00:58:18.328882  123289 main.go:141] libmachine: (bridge-826505) Calling .DriverName
	I1212 00:58:18.329193  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHHostname
	I1212 00:58:18.331969  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:18.332378  123289 main.go:141] libmachine: (bridge-826505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:38:77", ip: ""} in network mk-bridge-826505: {Iface:virbr3 ExpiryTime:2023-12-12 01:58:12 +0000 UTC Type:0 Mac:52:54:00:ff:38:77 Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:bridge-826505 Clientid:01:52:54:00:ff:38:77}
	I1212 00:58:18.332412  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined IP address 192.168.61.215 and MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:18.332626  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHPort
	I1212 00:58:18.332855  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHKeyPath
	I1212 00:58:18.333062  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHKeyPath
	I1212 00:58:18.333237  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHUsername
	I1212 00:58:18.333446  123289 main.go:141] libmachine: Using SSH client type: native
	I1212 00:58:18.333943  123289 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I1212 00:58:18.333964  123289 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 00:58:18.469757  123289 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 00:58:18.469794  123289 buildroot.go:70] root file system type: tmpfs
	I1212 00:58:18.469975  123289 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 00:58:18.470003  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHHostname
	I1212 00:58:18.473055  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:18.473403  123289 main.go:141] libmachine: (bridge-826505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:38:77", ip: ""} in network mk-bridge-826505: {Iface:virbr3 ExpiryTime:2023-12-12 01:58:12 +0000 UTC Type:0 Mac:52:54:00:ff:38:77 Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:bridge-826505 Clientid:01:52:54:00:ff:38:77}
	I1212 00:58:18.473435  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined IP address 192.168.61.215 and MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:18.473603  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHPort
	I1212 00:58:18.473809  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHKeyPath
	I1212 00:58:18.473975  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHKeyPath
	I1212 00:58:18.474130  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHUsername
	I1212 00:58:18.474329  123289 main.go:141] libmachine: Using SSH client type: native
	I1212 00:58:18.474782  123289 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I1212 00:58:18.474892  123289 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 00:58:18.625069  123289 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 00:58:18.625109  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHHostname
	I1212 00:58:18.628233  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:18.628697  123289 main.go:141] libmachine: (bridge-826505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:38:77", ip: ""} in network mk-bridge-826505: {Iface:virbr3 ExpiryTime:2023-12-12 01:58:12 +0000 UTC Type:0 Mac:52:54:00:ff:38:77 Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:bridge-826505 Clientid:01:52:54:00:ff:38:77}
	I1212 00:58:18.628733  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined IP address 192.168.61.215 and MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:18.629066  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHPort
	I1212 00:58:18.629307  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHKeyPath
	I1212 00:58:18.629515  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHKeyPath
	I1212 00:58:18.629683  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHUsername
	I1212 00:58:18.629861  123289 main.go:141] libmachine: Using SSH client type: native
	I1212 00:58:18.630348  123289 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I1212 00:58:18.630376  123289 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 00:58:19.568603  123289 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
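	The `diff … || { mv … && restart; }` command above makes the unit install idempotent: Docker is only restarted when the freshly generated docker.service actually differs from the one on disk (here none existed yet, hence the "can't stat" message followed by the new symlink). A tiny Go sketch of that write-only-if-changed pattern, with hypothetical names and not minikube's code:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
)

// installUnit writes unit to path only when the content changed and reports
// whether the caller still needs to daemon-reload and restart the service.
func installUnit(path string, unit []byte) (changed bool, err error) {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, unit) {
		return false, nil // already up to date, nothing to restart
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, unit, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(tmp, path)
}

func main() {
	changed, err := installUnit("/tmp/docker.service", []byte("[Unit]\nDescription=example\n"))
	fmt.Println("changed:", changed, "err:", err)
	// on a real host a change would be followed by:
	//   systemctl daemon-reload && systemctl enable docker && systemctl restart docker
}
```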
	I1212 00:58:19.568648  123289 main.go:141] libmachine: Checking connection to Docker...
	I1212 00:58:19.568662  123289 main.go:141] libmachine: (bridge-826505) Calling .GetURL
	I1212 00:58:19.570086  123289 main.go:141] libmachine: (bridge-826505) DBG | Using libvirt version 6000000
	I1212 00:58:19.572556  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:19.572968  123289 main.go:141] libmachine: (bridge-826505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:38:77", ip: ""} in network mk-bridge-826505: {Iface:virbr3 ExpiryTime:2023-12-12 01:58:12 +0000 UTC Type:0 Mac:52:54:00:ff:38:77 Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:bridge-826505 Clientid:01:52:54:00:ff:38:77}
	I1212 00:58:19.573000  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined IP address 192.168.61.215 and MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:19.573149  123289 main.go:141] libmachine: Docker is up and running!
	I1212 00:58:19.573177  123289 main.go:141] libmachine: Reticulating splines...
	I1212 00:58:19.573187  123289 client.go:171] LocalClient.Create took 26.010751129s
	I1212 00:58:19.573211  123289 start.go:167] duration metric: libmachine.API.Create for "bridge-826505" took 26.01082414s
	I1212 00:58:19.573224  123289 start.go:300] post-start starting for "bridge-826505" (driver="kvm2")
	I1212 00:58:19.573237  123289 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:58:19.573259  123289 main.go:141] libmachine: (bridge-826505) Calling .DriverName
	I1212 00:58:19.573532  123289 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:58:19.573567  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHHostname
	I1212 00:58:19.575573  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:19.575860  123289 main.go:141] libmachine: (bridge-826505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:38:77", ip: ""} in network mk-bridge-826505: {Iface:virbr3 ExpiryTime:2023-12-12 01:58:12 +0000 UTC Type:0 Mac:52:54:00:ff:38:77 Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:bridge-826505 Clientid:01:52:54:00:ff:38:77}
	I1212 00:58:19.575891  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined IP address 192.168.61.215 and MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:19.576015  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHPort
	I1212 00:58:19.576196  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHKeyPath
	I1212 00:58:19.576332  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHUsername
	I1212 00:58:19.576608  123289 sshutil.go:53] new ssh client: &{IP:192.168.61.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/bridge-826505/id_rsa Username:docker}
	I1212 00:58:19.675828  123289 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:58:19.682257  123289 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 00:58:19.682290  123289 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-80294/.minikube/addons for local assets ...
	I1212 00:58:19.682359  123289 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-80294/.minikube/files for local assets ...
	I1212 00:58:19.682459  123289 filesync.go:149] local asset: /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem -> 876092.pem in /etc/ssl/certs
	I1212 00:58:19.682594  123289 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:58:19.692019  123289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem --> /etc/ssl/certs/876092.pem (1708 bytes)
	I1212 00:58:19.717948  123289 start.go:303] post-start completed in 144.707144ms
	I1212 00:58:19.718002  123289 main.go:141] libmachine: (bridge-826505) Calling .GetConfigRaw
	I1212 00:58:19.718641  123289 main.go:141] libmachine: (bridge-826505) Calling .GetIP
	I1212 00:58:19.721537  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:19.721822  123289 main.go:141] libmachine: (bridge-826505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:38:77", ip: ""} in network mk-bridge-826505: {Iface:virbr3 ExpiryTime:2023-12-12 01:58:12 +0000 UTC Type:0 Mac:52:54:00:ff:38:77 Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:bridge-826505 Clientid:01:52:54:00:ff:38:77}
	I1212 00:58:19.721853  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined IP address 192.168.61.215 and MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:19.722220  123289 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/bridge-826505/config.json ...
	I1212 00:58:19.722513  123289 start.go:128] duration metric: createHost completed in 26.181340956s
	I1212 00:58:19.722553  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHHostname
	I1212 00:58:19.725069  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:19.725505  123289 main.go:141] libmachine: (bridge-826505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:38:77", ip: ""} in network mk-bridge-826505: {Iface:virbr3 ExpiryTime:2023-12-12 01:58:12 +0000 UTC Type:0 Mac:52:54:00:ff:38:77 Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:bridge-826505 Clientid:01:52:54:00:ff:38:77}
	I1212 00:58:19.725534  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined IP address 192.168.61.215 and MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:19.725652  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHPort
	I1212 00:58:19.725850  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHKeyPath
	I1212 00:58:19.726027  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHKeyPath
	I1212 00:58:19.726174  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHUsername
	I1212 00:58:19.726384  123289 main.go:141] libmachine: Using SSH client type: native
	I1212 00:58:19.726872  123289 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I1212 00:58:19.726892  123289 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 00:58:19.861749  123289 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702342699.833360543
	
	I1212 00:58:19.861778  123289 fix.go:206] guest clock: 1702342699.833360543
	I1212 00:58:19.861800  123289 fix.go:219] Guest: 2023-12-12 00:58:19.833360543 +0000 UTC Remote: 2023-12-12 00:58:19.722536205 +0000 UTC m=+56.829005451 (delta=110.824338ms)
	I1212 00:58:19.861821  123289 fix.go:190] guest clock delta is within tolerance: 110.824338ms
	I1212 00:58:19.861828  123289 start.go:83] releasing machines lock for "bridge-826505", held for 26.320862558s
	I1212 00:58:19.862349  123289 main.go:141] libmachine: (bridge-826505) Calling .DriverName
	I1212 00:58:19.862696  123289 main.go:141] libmachine: (bridge-826505) Calling .GetIP
	I1212 00:58:19.865824  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:19.866209  123289 main.go:141] libmachine: (bridge-826505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:38:77", ip: ""} in network mk-bridge-826505: {Iface:virbr3 ExpiryTime:2023-12-12 01:58:12 +0000 UTC Type:0 Mac:52:54:00:ff:38:77 Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:bridge-826505 Clientid:01:52:54:00:ff:38:77}
	I1212 00:58:19.866247  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined IP address 192.168.61.215 and MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:19.866455  123289 main.go:141] libmachine: (bridge-826505) Calling .DriverName
	I1212 00:58:19.867026  123289 main.go:141] libmachine: (bridge-826505) Calling .DriverName
	I1212 00:58:19.867232  123289 main.go:141] libmachine: (bridge-826505) Calling .DriverName
	I1212 00:58:19.867310  123289 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:58:19.867347  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHHostname
	I1212 00:58:19.867423  123289 ssh_runner.go:195] Run: cat /version.json
	I1212 00:58:19.867439  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHHostname
	I1212 00:58:19.870234  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:19.870476  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:19.870601  123289 main.go:141] libmachine: (bridge-826505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:38:77", ip: ""} in network mk-bridge-826505: {Iface:virbr3 ExpiryTime:2023-12-12 01:58:12 +0000 UTC Type:0 Mac:52:54:00:ff:38:77 Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:bridge-826505 Clientid:01:52:54:00:ff:38:77}
	I1212 00:58:19.870629  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined IP address 192.168.61.215 and MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:19.870781  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHPort
	I1212 00:58:19.870909  123289 main.go:141] libmachine: (bridge-826505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:38:77", ip: ""} in network mk-bridge-826505: {Iface:virbr3 ExpiryTime:2023-12-12 01:58:12 +0000 UTC Type:0 Mac:52:54:00:ff:38:77 Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:bridge-826505 Clientid:01:52:54:00:ff:38:77}
	I1212 00:58:19.870943  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHKeyPath
	I1212 00:58:19.870989  123289 main.go:141] libmachine: (bridge-826505) DBG | domain bridge-826505 has defined IP address 192.168.61.215 and MAC address 52:54:00:ff:38:77 in network mk-bridge-826505
	I1212 00:58:19.871135  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHUsername
	I1212 00:58:19.871137  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHPort
	I1212 00:58:19.871322  123289 sshutil.go:53] new ssh client: &{IP:192.168.61.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/bridge-826505/id_rsa Username:docker}
	I1212 00:58:19.871324  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHKeyPath
	I1212 00:58:19.871505  123289 main.go:141] libmachine: (bridge-826505) Calling .GetSSHUsername
	I1212 00:58:19.871651  123289 sshutil.go:53] new ssh client: &{IP:192.168.61.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/bridge-826505/id_rsa Username:docker}
	I1212 00:58:19.997289  123289 ssh_runner.go:195] Run: systemctl --version
	I1212 00:58:20.005115  123289 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:58:20.011999  123289 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:58:20.012089  123289 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:58:20.030477  123289 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:58:20.030511  123289 start.go:475] detecting cgroup driver to use...
	I1212 00:58:20.030645  123289 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:58:20.053144  123289 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 00:58:20.065552  123289 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 00:58:20.076958  123289 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 00:58:20.077065  123289 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 00:58:20.087868  123289 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:58:20.101599  123289 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 00:58:20.113694  123289 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:58:20.124029  123289 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:58:20.135454  123289 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 00:58:20.146467  123289 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:58:20.156121  123289 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:58:20.168256  123289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:58:20.278033  123289 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 00:58:20.297940  123289 start.go:475] detecting cgroup driver to use...
	I1212 00:58:20.298043  123289 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 00:58:20.315216  123289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:58:20.337363  123289 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:58:20.365257  123289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:58:20.381423  123289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 00:58:20.395711  123289 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 00:58:20.440704  123289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 00:58:20.456398  123289 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:58:20.479272  123289 ssh_runner.go:195] Run: which cri-dockerd
	I1212 00:58:20.484107  123289 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 00:58:20.495444  123289 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 00:58:20.517236  123289 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 00:58:20.662574  123289 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 00:58:20.785758  123289 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 00:58:20.785951  123289 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 00:58:20.806620  123289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:58:20.945668  123289 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 00:58:22.426952  123289 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.48122307s)
	I1212 00:58:22.427044  123289 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 00:58:22.554932  123289 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 00:58:22.693471  123289 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 00:58:22.817046  123289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:58:22.936887  123289 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 00:58:22.954273  123289 ssh_runner.go:195] Run: sudo journalctl --no-pager -u cri-docker.socket
	I1212 00:58:22.969904  123289 out.go:177] 
	W1212 00:58:22.971427  123289 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Tue 2023-12-12 00:58:08 UTC, ends at Tue 2023-12-12 00:58:22 UTC. --
	Dec 12 00:58:09 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 00:58:09 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 00:58:14 minikube systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 00:58:14 minikube systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 00:58:14 minikube systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 00:58:14 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 00:58:14 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 00:58:16 minikube systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 00:58:16 minikube systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 00:58:16 minikube systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 00:58:16 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 00:58:16 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 00:58:18 bridge-826505 systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 00:58:18 bridge-826505 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 00:58:18 bridge-826505 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 00:58:18 bridge-826505 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 00:58:18 bridge-826505 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 00:58:22 bridge-826505 systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 00:58:22 bridge-826505 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 00:58:22 bridge-826505 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 00:58:22 bridge-826505 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 12 00:58:22 bridge-826505 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	W1212 00:58:22.971462  123289 out.go:239] * 
	W1212 00:58:22.972693  123289 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:58:22.974551  123289 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 90
--- FAIL: TestNetworkPlugins/group/bridge/Start (60.11s)
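Editor's note: the journal excerpt in the failure above shows the immediate cause: systemd refuses to restart cri-docker.socket while the cri-docker.service it activates is still running ("Socket service cri-docker.service already active, refusing."). The lines below are a minimal, hypothetical sketch of how that state could be reproduced and cleared by hand inside the guest VM; they assume only standard systemd socket-activation behaviour and the unit names visible in the journal, and are not part of minikube's own provisioning flow.

	# Run inside the guest VM (e.g. via `minikube ssh`); unit names taken from the journal above.
	# Restarting the socket while its service is active reproduces the refusal seen in the log:
	sudo systemctl restart cri-docker.socket   # fails: "Socket service cri-docker.service already active, refusing."
	# One possible manual recovery: stop the service, restart the socket, then start the service again.
	sudo systemctl stop cri-docker.service
	sudo systemctl restart cri-docker.socket
	sudo systemctl start cri-docker.service
	# Confirm both units are active before retrying `minikube start`.
	systemctl is-active cri-docker.socket cri-docker.service

Whether minikube's provisioning should stop the service before restarting the socket is a question for the GitHub issue flow suggested above; the sketch only illustrates the systemd constraint the test tripped over.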

                                                
                                    

Test pass (287/323)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 17.28
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.4/json-events 4.31
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.08
17 TestDownloadOnly/v1.29.0-rc.2/json-events 10.83
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.15
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
26 TestBinaryMirror 0.58
27 TestOffline 103.09
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
32 TestAddons/Setup 158.31
34 TestAddons/parallel/Registry 15.87
35 TestAddons/parallel/Ingress 26.35
36 TestAddons/parallel/InspektorGadget 10.84
37 TestAddons/parallel/MetricsServer 6.17
38 TestAddons/parallel/HelmTiller 12.4
40 TestAddons/parallel/CSI 82.49
41 TestAddons/parallel/Headlamp 14.19
42 TestAddons/parallel/CloudSpanner 5.76
43 TestAddons/parallel/LocalPath 56.88
44 TestAddons/parallel/NvidiaDevicePlugin 5.51
47 TestAddons/serial/GCPAuth/Namespaces 0.12
48 TestAddons/StoppedEnableDisable 13.42
49 TestCertOptions 70.04
50 TestCertExpiration 337.42
51 TestDockerFlags 113.2
52 TestForceSystemdFlag 58.11
53 TestForceSystemdEnv 64.98
55 TestKVMDriverInstallOrUpdate 3.44
59 TestErrorSpam/setup 48.72
60 TestErrorSpam/start 0.39
61 TestErrorSpam/status 0.81
62 TestErrorSpam/pause 1.26
63 TestErrorSpam/unpause 1.33
64 TestErrorSpam/stop 13.27
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 65.68
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 39.27
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.37
76 TestFunctional/serial/CacheCmd/cache/add_local 1.32
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.33
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
84 TestFunctional/serial/ExtraConfig 42.45
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.14
87 TestFunctional/serial/LogsFileCmd 1.13
88 TestFunctional/serial/InvalidService 5.18
90 TestFunctional/parallel/ConfigCmd 0.44
91 TestFunctional/parallel/DashboardCmd 16.61
92 TestFunctional/parallel/DryRun 0.31
93 TestFunctional/parallel/InternationalLanguage 0.15
94 TestFunctional/parallel/StatusCmd 1.03
98 TestFunctional/parallel/ServiceCmdConnect 21.55
99 TestFunctional/parallel/AddonsCmd 0.17
100 TestFunctional/parallel/PersistentVolumeClaim 58.02
102 TestFunctional/parallel/SSHCmd 0.52
103 TestFunctional/parallel/CpCmd 1.11
104 TestFunctional/parallel/MySQL 35.6
105 TestFunctional/parallel/FileSync 0.25
106 TestFunctional/parallel/CertSync 1.57
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.26
114 TestFunctional/parallel/License 0.16
115 TestFunctional/parallel/DockerEnv/bash 1.11
116 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
117 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
118 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
119 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
120 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
121 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
122 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
123 TestFunctional/parallel/ImageCommands/ImageBuild 3.92
124 TestFunctional/parallel/ImageCommands/Setup 1.35
125 TestFunctional/parallel/ServiceCmd/DeployApp 32.19
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.92
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.5
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.31
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.65
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.79
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.89
142 TestFunctional/parallel/ServiceCmd/List 0.53
143 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
144 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
145 TestFunctional/parallel/ServiceCmd/Format 0.33
146 TestFunctional/parallel/ServiceCmd/URL 0.33
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
148 TestFunctional/parallel/MountCmd/any-port 7.83
149 TestFunctional/parallel/ProfileCmd/profile_list 0.37
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.28
151 TestFunctional/parallel/Version/short 0.07
152 TestFunctional/parallel/Version/components 0.75
153 TestFunctional/parallel/MountCmd/specific-port 2.11
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.75
155 TestFunctional/delete_addon-resizer_images 0.06
156 TestFunctional/delete_my-image_image 0.01
157 TestFunctional/delete_minikube_cached_images 0.02
158 TestGvisorAddon 198.01
161 TestImageBuild/serial/Setup 50.85
162 TestImageBuild/serial/NormalBuild 1.68
163 TestImageBuild/serial/BuildWithBuildArg 1.42
164 TestImageBuild/serial/BuildWithDockerIgnore 0.43
165 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.29
168 TestIngressAddonLegacy/StartLegacyK8sCluster 80.02
170 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 17.46
171 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.52
172 TestIngressAddonLegacy/serial/ValidateIngressAddons 43.88
175 TestJSONOutput/start/Command 106.47
176 TestJSONOutput/start/Audit 0
178 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Command 0.6
182 TestJSONOutput/pause/Audit 0
184 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Command 0.56
188 TestJSONOutput/unpause/Audit 0
190 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/stop/Command 13.12
194 TestJSONOutput/stop/Audit 0
196 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
198 TestErrorJSONOutput 0.22
203 TestMainNoArgs 0.06
204 TestMinikubeProfile 105.37
207 TestMountStart/serial/StartWithMountFirst 29.72
208 TestMountStart/serial/VerifyMountFirst 0.39
209 TestMountStart/serial/StartWithMountSecond 31.33
210 TestMountStart/serial/VerifyMountSecond 0.42
211 TestMountStart/serial/DeleteFirst 0.67
212 TestMountStart/serial/VerifyMountPostDelete 0.43
213 TestMountStart/serial/Stop 2.22
214 TestMountStart/serial/RestartStopped 24
215 TestMountStart/serial/VerifyMountPostStop 0.4
218 TestMultiNode/serial/FreshStart2Nodes 130.6
219 TestMultiNode/serial/DeployApp2Nodes 4.96
220 TestMultiNode/serial/PingHostFrom2Pods 0.95
221 TestMultiNode/serial/AddNode 45.93
222 TestMultiNode/serial/MultiNodeLabels 0.06
223 TestMultiNode/serial/ProfileList 0.22
224 TestMultiNode/serial/CopyFile 7.91
225 TestMultiNode/serial/StopNode 3.99
226 TestMultiNode/serial/StartAfterStop 31.36
227 TestMultiNode/serial/RestartKeepsNodes 171.03
228 TestMultiNode/serial/DeleteNode 1.77
229 TestMultiNode/serial/StopMultiNode 25.68
231 TestMultiNode/serial/ValidateNameConflict 52.65
236 TestPreload 181.46
238 TestScheduledStopUnix 122.94
239 TestSkaffold 142.1
242 TestRunningBinaryUpgrade 171.22
244 TestKubernetesUpgrade 272.12
257 TestStoppedBinaryUpgrade/Setup 0.58
258 TestStoppedBinaryUpgrade/Upgrade 212.18
260 TestPause/serial/Start 105.07
261 TestStoppedBinaryUpgrade/MinikubeLogs 1.35
270 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
271 TestNoKubernetes/serial/StartWithK8s 85.7
272 TestPause/serial/SecondStartNoReconfiguration 80.42
273 TestNoKubernetes/serial/StartWithStopK8s 35.17
274 TestNoKubernetes/serial/Start 30.94
275 TestPause/serial/Pause 0.64
276 TestPause/serial/VerifyStatus 0.28
277 TestPause/serial/Unpause 0.63
278 TestPause/serial/PauseAgain 0.81
279 TestPause/serial/DeletePaused 1.12
280 TestPause/serial/VerifyDeletedResources 0.38
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
282 TestNetworkPlugins/group/auto/Start 95.29
283 TestNoKubernetes/serial/ProfileList 0.81
284 TestNoKubernetes/serial/Stop 2.25
285 TestNoKubernetes/serial/StartNoArgs 74.57
286 TestNetworkPlugins/group/kindnet/Start 96.48
287 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
288 TestNetworkPlugins/group/calico/Start 130.65
289 TestNetworkPlugins/group/auto/KubeletFlags 0.22
290 TestNetworkPlugins/group/auto/NetCatPod 12.34
291 TestNetworkPlugins/group/auto/DNS 0.23
292 TestNetworkPlugins/group/auto/Localhost 0.19
293 TestNetworkPlugins/group/auto/HairPin 0.22
294 TestNetworkPlugins/group/custom-flannel/Start 88.1
295 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
296 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
297 TestNetworkPlugins/group/kindnet/NetCatPod 15.48
298 TestNetworkPlugins/group/kindnet/DNS 0.26
299 TestNetworkPlugins/group/kindnet/Localhost 0.23
300 TestNetworkPlugins/group/kindnet/HairPin 0.26
301 TestNetworkPlugins/group/false/Start 82.63
302 TestNetworkPlugins/group/enable-default-cni/Start 132.09
303 TestNetworkPlugins/group/calico/ControllerPod 5.03
304 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
305 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.32
306 TestNetworkPlugins/group/calico/KubeletFlags 0.24
307 TestNetworkPlugins/group/calico/NetCatPod 12.43
308 TestNetworkPlugins/group/custom-flannel/DNS 0.26
309 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
310 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
311 TestNetworkPlugins/group/calico/DNS 0.5
312 TestNetworkPlugins/group/calico/Localhost 0.26
313 TestNetworkPlugins/group/calico/HairPin 0.29
314 TestNetworkPlugins/group/flannel/Start 90.61
316 TestNetworkPlugins/group/false/KubeletFlags 0.25
317 TestNetworkPlugins/group/false/NetCatPod 12.34
318 TestNetworkPlugins/group/false/DNS 0.25
319 TestNetworkPlugins/group/false/Localhost 0.22
320 TestNetworkPlugins/group/false/HairPin 0.22
321 TestNetworkPlugins/group/kubenet/Start 110.22
323 TestStartStop/group/old-k8s-version/serial/FirstStart 146.8
324 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
325 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.35
326 TestNetworkPlugins/group/flannel/ControllerPod 5.03
327 TestNetworkPlugins/group/flannel/KubeletFlags 0.49
328 TestNetworkPlugins/group/flannel/NetCatPod 12.73
329 TestNetworkPlugins/group/enable-default-cni/DNS 0.28
330 TestNetworkPlugins/group/enable-default-cni/Localhost 0.24
331 TestNetworkPlugins/group/enable-default-cni/HairPin 0.25
332 TestNetworkPlugins/group/flannel/DNS 0.2
333 TestNetworkPlugins/group/flannel/Localhost 0.17
334 TestNetworkPlugins/group/flannel/HairPin 0.17
336 TestStartStop/group/no-preload/serial/FirstStart 101.16
338 TestStartStop/group/embed-certs/serial/FirstStart 101.28
339 TestNetworkPlugins/group/kubenet/KubeletFlags 0.29
340 TestNetworkPlugins/group/kubenet/NetCatPod 13.38
341 TestNetworkPlugins/group/kubenet/DNS 0.21
342 TestNetworkPlugins/group/kubenet/Localhost 0.16
343 TestNetworkPlugins/group/kubenet/HairPin 0.17
345 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 76.92
346 TestStartStop/group/no-preload/serial/DeployApp 9.98
347 TestStartStop/group/old-k8s-version/serial/DeployApp 9.51
348 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.16
349 TestStartStop/group/no-preload/serial/Stop 13.16
350 TestStartStop/group/embed-certs/serial/DeployApp 10.51
351 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.65
352 TestStartStop/group/old-k8s-version/serial/Stop 13.16
353 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.23
354 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
355 TestStartStop/group/no-preload/serial/SecondStart 338.33
356 TestStartStop/group/embed-certs/serial/Stop 13.14
357 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
358 TestStartStop/group/old-k8s-version/serial/SecondStart 478.23
359 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
360 TestStartStop/group/embed-certs/serial/SecondStart 347.3
361 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.49
362 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.1
363 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.13
364 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
365 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 355.77
366 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
367 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
368 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
369 TestStartStop/group/no-preload/serial/Pause 2.78
371 TestStartStop/group/newest-cni/serial/FirstStart 78.2
372 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.03
373 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
374 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
375 TestStartStop/group/embed-certs/serial/Pause 2.86
376 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.03
377 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
378 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
379 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.75
380 TestStartStop/group/newest-cni/serial/DeployApp 0
381 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.94
382 TestStartStop/group/newest-cni/serial/Stop 8.12
383 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
384 TestStartStop/group/newest-cni/serial/SecondStart 48.36
385 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
386 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
389 TestStartStop/group/newest-cni/serial/Pause 2.48
390 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
391 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
392 TestStartStop/group/old-k8s-version/serial/Pause 2.45
x
+
TestDownloadOnly/v1.16.0/json-events (17.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-034111 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-034111 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (17.278833094s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (17.28s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-034111
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-034111: exit status 85 (82.297195ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-034111 | jenkins | v1.32.0 | 12 Dec 23 00:10 UTC |          |
	|         | -p download-only-034111        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 00:10:44
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:10:44.968575   87621 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:10:44.968861   87621 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:10:44.968871   87621 out.go:309] Setting ErrFile to fd 2...
	I1212 00:10:44.968877   87621 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:10:44.969097   87621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-80294/.minikube/bin
	W1212 00:10:44.969216   87621 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17764-80294/.minikube/config/config.json: open /home/jenkins/minikube-integration/17764-80294/.minikube/config/config.json: no such file or directory
	I1212 00:10:44.969777   87621 out.go:303] Setting JSON to true
	I1212 00:10:44.970738   87621 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10345,"bootTime":1702329500,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:10:44.970802   87621 start.go:138] virtualization: kvm guest
	I1212 00:10:44.973707   87621 out.go:97] [download-only-034111] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 00:10:44.973848   87621 notify.go:220] Checking for updates...
	W1212 00:10:44.973843   87621 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17764-80294/.minikube/cache/preloaded-tarball: no such file or directory
	I1212 00:10:44.975468   87621 out.go:169] MINIKUBE_LOCATION=17764
	I1212 00:10:44.977170   87621 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:10:44.978701   87621 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17764-80294/kubeconfig
	I1212 00:10:44.980197   87621 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-80294/.minikube
	I1212 00:10:44.981664   87621 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1212 00:10:44.984480   87621 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 00:10:44.984793   87621 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 00:10:45.021374   87621 out.go:97] Using the kvm2 driver based on user configuration
	I1212 00:10:45.021430   87621 start.go:298] selected driver: kvm2
	I1212 00:10:45.021436   87621 start.go:902] validating driver "kvm2" against <nil>
	I1212 00:10:45.021870   87621 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:10:45.021973   87621 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17764-80294/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 00:10:45.036939   87621 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 00:10:45.036992   87621 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 00:10:45.037501   87621 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1212 00:10:45.037636   87621 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 00:10:45.037691   87621 cni.go:84] Creating CNI manager for ""
	I1212 00:10:45.037726   87621 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1212 00:10:45.037737   87621 start_flags.go:323] config:
	{Name:download-only-034111 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-034111 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:10:45.038000   87621 iso.go:125] acquiring lock: {Name:mk9f395cbf4246894893bf64341667bb412992c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:10:45.040022   87621 out.go:97] Downloading VM boot image ...
	I1212 00:10:45.040059   87621 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17764-80294/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso
	I1212 00:10:50.483429   87621 out.go:97] Starting control plane node download-only-034111 in cluster download-only-034111
	I1212 00:10:50.483476   87621 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1212 00:10:50.510411   87621 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1212 00:10:50.510447   87621 cache.go:56] Caching tarball of preloaded images
	I1212 00:10:50.510702   87621 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1212 00:10:50.512660   87621 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1212 00:10:50.512686   87621 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1212 00:10:50.546056   87621 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/17764-80294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1212 00:10:55.641844   87621 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1212 00:10:55.641938   87621 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17764-80294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1212 00:10:56.369303   87621 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1212 00:10:56.369666   87621 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/download-only-034111/config.json ...
	I1212 00:10:56.369696   87621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/download-only-034111/config.json: {Name:mk449afbbb840be03c11f4907864872d1c8695a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:10:56.369859   87621 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1212 00:10:56.370052   87621 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17764-80294/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-034111"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
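Editor's note: as context for the preload flow logged above, the commands below are a hypothetical sketch of how the download cache could be inspected on the test host and the tarball re-verified against the md5 checksum that appears in the log; the paths and checksum are taken verbatim from the output above, not from any other source.

	# Hypothetical manual check on the Jenkins host; MINIKUBE_HOME matches the log above.
	MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-80294/.minikube
	ls "$MINIKUBE_HOME/cache/preloaded-tarball/"
	ls "$MINIKUBE_HOME/cache/linux/amd64/v1.16.0/"
	# Re-verify the v1.16.0 preload tarball against the checksum logged during download.
	echo "326f3ce331abb64565b50b8c9e791244  $MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4" | md5sum -c -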

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (4.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-034111 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-034111 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=kvm2 : (4.309673529s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (4.31s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-034111
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-034111: exit status 85 (79.344634ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-034111 | jenkins | v1.32.0 | 12 Dec 23 00:10 UTC |          |
	|         | -p download-only-034111        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-034111 | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC |          |
	|         | -p download-only-034111        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 00:11:02
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:11:02.333569   87700 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:11:02.333831   87700 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:11:02.333839   87700 out.go:309] Setting ErrFile to fd 2...
	I1212 00:11:02.333843   87700 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:11:02.334044   87700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-80294/.minikube/bin
	W1212 00:11:02.334178   87700 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17764-80294/.minikube/config/config.json: open /home/jenkins/minikube-integration/17764-80294/.minikube/config/config.json: no such file or directory
	I1212 00:11:02.334635   87700 out.go:303] Setting JSON to true
	I1212 00:11:02.335494   87700 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10362,"bootTime":1702329500,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:11:02.335562   87700 start.go:138] virtualization: kvm guest
	I1212 00:11:02.337894   87700 out.go:97] [download-only-034111] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 00:11:02.339978   87700 out.go:169] MINIKUBE_LOCATION=17764
	I1212 00:11:02.338157   87700 notify.go:220] Checking for updates...
	I1212 00:11:02.343480   87700 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:11:02.345264   87700 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17764-80294/kubeconfig
	I1212 00:11:02.347038   87700 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-80294/.minikube
	I1212 00:11:02.348590   87700 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-034111"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (10.83s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-034111 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-034111 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=kvm2 : (10.831039348s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (10.83s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-034111
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-034111: exit status 85 (76.542748ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-034111 | jenkins | v1.32.0 | 12 Dec 23 00:10 UTC |          |
	|         | -p download-only-034111           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-034111 | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC |          |
	|         | -p download-only-034111           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-034111 | jenkins | v1.32.0 | 12 Dec 23 00:11 UTC |          |
	|         | -p download-only-034111           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 00:11:06
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:11:06.719770   87745 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:11:06.719889   87745 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:11:06.719897   87745 out.go:309] Setting ErrFile to fd 2...
	I1212 00:11:06.719902   87745 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:11:06.720093   87745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-80294/.minikube/bin
	W1212 00:11:06.720199   87745 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17764-80294/.minikube/config/config.json: open /home/jenkins/minikube-integration/17764-80294/.minikube/config/config.json: no such file or directory
	I1212 00:11:06.720626   87745 out.go:303] Setting JSON to true
	I1212 00:11:06.721386   87745 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10367,"bootTime":1702329500,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:11:06.721449   87745 start.go:138] virtualization: kvm guest
	I1212 00:11:06.723479   87745 out.go:97] [download-only-034111] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 00:11:06.725060   87745 out.go:169] MINIKUBE_LOCATION=17764
	I1212 00:11:06.723632   87745 notify.go:220] Checking for updates...
	I1212 00:11:06.728107   87745 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:11:06.729437   87745 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17764-80294/kubeconfig
	I1212 00:11:06.730899   87745 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-80294/.minikube
	I1212 00:11:06.732482   87745 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1212 00:11:06.735037   87745 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 00:11:06.735668   87745 config.go:182] Loaded profile config "download-only-034111": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W1212 00:11:06.735733   87745 start.go:810] api.Load failed for download-only-034111: filestore "download-only-034111": Docker machine "download-only-034111" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 00:11:06.735872   87745 driver.go:392] Setting default libvirt URI to qemu:///system
	W1212 00:11:06.735929   87745 start.go:810] api.Load failed for download-only-034111: filestore "download-only-034111": Docker machine "download-only-034111" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 00:11:06.767874   87745 out.go:97] Using the kvm2 driver based on existing profile
	I1212 00:11:06.767907   87745 start.go:298] selected driver: kvm2
	I1212 00:11:06.767931   87745 start.go:902] validating driver "kvm2" against &{Name:download-only-034111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:download-only-034111 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:11:06.768372   87745 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:11:06.768439   87745 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17764-80294/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 00:11:06.782360   87745 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 00:11:06.783122   87745 cni.go:84] Creating CNI manager for ""
	I1212 00:11:06.783148   87745 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 00:11:06.783166   87745 start_flags.go:323] config:
	{Name:download-only-034111 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-034111 Namespace
:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:11:06.783353   87745 iso.go:125] acquiring lock: {Name:mk9f395cbf4246894893bf64341667bb412992c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:11:06.785121   87745 out.go:97] Starting control plane node download-only-034111 in cluster download-only-034111
	I1212 00:11:06.785137   87745 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1212 00:11:06.808665   87745 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I1212 00:11:06.808696   87745 cache.go:56] Caching tarball of preloaded images
	I1212 00:11:06.808848   87745 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1212 00:11:06.810643   87745 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I1212 00:11:06.810662   87745 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1212 00:11:06.835721   87745 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:d472e9d5f1548dd0d68eb75b714c5436 -> /home/jenkins/minikube-integration/17764-80294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I1212 00:11:11.640630   87745 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1212 00:11:11.640727   87745 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17764-80294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1212 00:11:12.360894   87745 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I1212 00:11:12.361037   87745 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/download-only-034111/config.json ...
	I1212 00:11:12.361245   87745 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1212 00:11:12.361465   87745 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17764-80294/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-034111"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-034111
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-541580 --alsologtostderr --binary-mirror http://127.0.0.1:34319 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-541580" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-541580
--- PASS: TestBinaryMirror (0.58s)

                                                
                                    
TestOffline (103.09s)

=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-946366 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-946366 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m41.926903898s)
helpers_test.go:175: Cleaning up "offline-docker-946366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-946366
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-946366: (1.163494472s)
--- PASS: TestOffline (103.09s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-018377
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-018377: exit status 85 (65.005066ms)

                                                
                                                
-- stdout --
	* Profile "addons-018377" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-018377"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-018377
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-018377: exit status 85 (65.587589ms)

                                                
                                                
-- stdout --
	* Profile "addons-018377" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-018377"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (158.31s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-018377 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-018377 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m38.313571068s)
--- PASS: TestAddons/Setup (158.31s)

                                                
                                    
TestAddons/parallel/Registry (15.87s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 18.98265ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-qmbqx" [34b798da-d076-41a5-bcf2-542e90b93f8c] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.023212058s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-2nljw" [2d3b3701-ded9-498e-8131-0923349a9dff] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.020073506s
addons_test.go:339: (dbg) Run:  kubectl --context addons-018377 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-018377 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-018377 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.066751162s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-018377 ip
2023/12/12 00:14:12 [DEBUG] GET http://192.168.39.179:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-018377 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.87s)

                                                
                                    
TestAddons/parallel/Ingress (26.35s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-018377 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-018377 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-018377 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9f2e8575-e02c-4ef7-9bb6-7b25314d2900] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9f2e8575-e02c-4ef7-9bb6-7b25314d2900] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 16.018303159s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-018377 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-018377 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-018377 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.179
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-018377 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-018377 addons disable ingress-dns --alsologtostderr -v=1: (1.083141804s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-018377 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-018377 addons disable ingress --alsologtostderr -v=1: (7.742639956s)
--- PASS: TestAddons/parallel/Ingress (26.35s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.84s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jsd9h" [2a787f1b-2b8b-4cc2-a37e-80bf7a287677] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.012657986s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-018377
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-018377: (5.830746269s)
--- PASS: TestAddons/parallel/InspektorGadget (10.84s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.17s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 19.090193ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-kmb5l" [0e962373-62b5-485b-b6db-061a81c0cfae] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.027523806s
addons_test.go:414: (dbg) Run:  kubectl --context addons-018377 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-018377 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:431: (dbg) Done: out/minikube-linux-amd64 -p addons-018377 addons disable metrics-server --alsologtostderr -v=1: (1.04118762s)
--- PASS: TestAddons/parallel/MetricsServer (6.17s)

                                                
                                    
TestAddons/parallel/HelmTiller (12.4s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 19.076621ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-987d9" [26e5ab6d-04cf-47d4-9f8f-9164684dec2f] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.02409781s
addons_test.go:472: (dbg) Run:  kubectl --context addons-018377 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-018377 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.739686984s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-018377 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.40s)

                                                
                                    
TestAddons/parallel/CSI (82.49s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 24.792521ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-018377 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-018377 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2a08c36f-dd5e-4945-8b75-d1c8f21fd3ec] Pending
helpers_test.go:344: "task-pv-pod" [2a08c36f-dd5e-4945-8b75-d1c8f21fd3ec] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [2a08c36f-dd5e-4945-8b75-d1c8f21fd3ec] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.029748372s
addons_test.go:583: (dbg) Run:  kubectl --context addons-018377 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-018377 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-018377 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-018377 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-018377 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-018377 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-018377 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-018377 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [63b4d277-32f4-41c5-9949-d0be7e50bbfa] Pending
helpers_test.go:344: "task-pv-pod-restore" [63b4d277-32f4-41c5-9949-d0be7e50bbfa] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [63b4d277-32f4-41c5-9949-d0be7e50bbfa] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.015658744s
addons_test.go:625: (dbg) Run:  kubectl --context addons-018377 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-018377 delete pod task-pv-pod-restore: (1.221175551s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-018377 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-018377 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-018377 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-018377 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.867262385s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-018377 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (82.49s)

                                                
                                    
TestAddons/parallel/Headlamp (14.19s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-018377 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-018377 --alsologtostderr -v=1: (1.174694254s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-jnkps" [be9d724a-bd79-431c-b355-461e43fec318] Pending
helpers_test.go:344: "headlamp-777fd4b855-jnkps" [be9d724a-bd79-431c-b355-461e43fec318] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-jnkps" [be9d724a-bd79-431c-b355-461e43fec318] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.018318568s
--- PASS: TestAddons/parallel/Headlamp (14.19s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.76s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-8nb7c" [b19ed419-a3e2-42d9-adce-1af575692101] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.012092398s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-018377
--- PASS: TestAddons/parallel/CloudSpanner (5.76s)

                                                
                                    
TestAddons/parallel/LocalPath (56.88s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-018377 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-018377 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018377 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [4f61c853-319e-44bb-a06b-569c06478a6f] Pending
helpers_test.go:344: "test-local-path" [4f61c853-319e-44bb-a06b-569c06478a6f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [4f61c853-319e-44bb-a06b-569c06478a6f] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [4f61c853-319e-44bb-a06b-569c06478a6f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.015171195s
addons_test.go:890: (dbg) Run:  kubectl --context addons-018377 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-018377 ssh "cat /opt/local-path-provisioner/pvc-78d4a9bd-cbc7-4e5a-9d06-7656796057f0_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-018377 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-018377 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-018377 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-018377 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.282773596s)
--- PASS: TestAddons/parallel/LocalPath (56.88s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.51s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-vhn27" [3a9cad4c-8e64-4fe8-aada-f7b712113726] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.014907627s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-018377
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.51s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-018377 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-018377 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/StoppedEnableDisable (13.42s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-018377
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-018377: (13.109800522s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-018377
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-018377
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-018377
--- PASS: TestAddons/StoppedEnableDisable (13.42s)

                                                
                                    
TestCertOptions (70.04s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-556256 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-556256 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m8.023121655s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-556256 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-556256 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-556256 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-556256" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-556256
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-556256: (1.292863681s)
--- PASS: TestCertOptions (70.04s)

                                                
                                    
TestCertExpiration (337.42s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-279208 --memory=2048 --cert-expiration=3m --driver=kvm2 
E1212 00:50:57.854420   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/skaffold-261002/client.crt: no such file or directory
E1212 00:50:57.859723   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/skaffold-261002/client.crt: no such file or directory
E1212 00:50:57.870034   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/skaffold-261002/client.crt: no such file or directory
E1212 00:50:57.890342   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/skaffold-261002/client.crt: no such file or directory
E1212 00:50:57.930615   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/skaffold-261002/client.crt: no such file or directory
E1212 00:50:58.010916   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/skaffold-261002/client.crt: no such file or directory
E1212 00:50:58.171323   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/skaffold-261002/client.crt: no such file or directory
E1212 00:50:58.491971   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/skaffold-261002/client.crt: no such file or directory
E1212 00:50:59.132983   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/skaffold-261002/client.crt: no such file or directory
E1212 00:51:00.413433   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/skaffold-261002/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-279208 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m46.74595899s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-279208 --memory=2048 --cert-expiration=8760h --driver=kvm2 
E1212 00:55:51.656586   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/gvisor-583949/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-279208 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (49.42708706s)
helpers_test.go:175: Cleaning up "cert-expiration-279208" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-279208
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-279208: (1.245776513s)
--- PASS: TestCertExpiration (337.42s)

                                                
                                    
TestDockerFlags (113.2s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-111957 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
E1212 00:51:18.336582   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/skaffold-261002/client.crt: no such file or directory
E1212 00:51:38.816793   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/skaffold-261002/client.crt: no such file or directory
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-111957 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m51.662030617s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-111957 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-111957 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-111957" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-111957
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-111957: (1.076481037s)
--- PASS: TestDockerFlags (113.20s)

                                                
                                    
TestForceSystemdFlag (58.11s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-016291 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-016291 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (56.698859301s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-016291 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-016291" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-016291
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-016291: (1.115603751s)
--- PASS: TestForceSystemdFlag (58.11s)

                                                
                                    
TestForceSystemdEnv (64.98s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-792350 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-792350 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m3.876099938s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-792350 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-792350" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-792350
--- PASS: TestForceSystemdEnv (64.98s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.44s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.44s)

                                                
                                    
TestErrorSpam/setup (48.72s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-788444 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-788444 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-788444 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-788444 --driver=kvm2 : (48.714880838s)
--- PASS: TestErrorSpam/setup (48.72s)

                                                
                                    
TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-788444 --log_dir /tmp/nospam-788444 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-788444 --log_dir /tmp/nospam-788444 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-788444 --log_dir /tmp/nospam-788444 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

                                                
                                    
TestErrorSpam/status (0.81s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-788444 --log_dir /tmp/nospam-788444 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-788444 --log_dir /tmp/nospam-788444 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-788444 --log_dir /tmp/nospam-788444 status
--- PASS: TestErrorSpam/status (0.81s)

                                                
                                    
TestErrorSpam/pause (1.26s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-788444 --log_dir /tmp/nospam-788444 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-788444 --log_dir /tmp/nospam-788444 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-788444 --log_dir /tmp/nospam-788444 pause
--- PASS: TestErrorSpam/pause (1.26s)

                                                
                                    
TestErrorSpam/unpause (1.33s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-788444 --log_dir /tmp/nospam-788444 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-788444 --log_dir /tmp/nospam-788444 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-788444 --log_dir /tmp/nospam-788444 unpause
--- PASS: TestErrorSpam/unpause (1.33s)

                                                
                                    
TestErrorSpam/stop (13.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-788444 --log_dir /tmp/nospam-788444 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-788444 --log_dir /tmp/nospam-788444 stop: (13.10812656s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-788444 --log_dir /tmp/nospam-788444 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-788444 --log_dir /tmp/nospam-788444 stop
--- PASS: TestErrorSpam/stop (13.27s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/test/nested/copy/87609/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (65.68s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-289946 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-289946 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m5.684575867s)
--- PASS: TestFunctional/serial/StartWithProxy (65.68s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (39.27s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-289946 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-289946 --alsologtostderr -v=8: (39.27385013s)
functional_test.go:659: soft start took 39.274589059s for "functional-289946" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.27s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-289946 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.37s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-289946 /tmp/TestFunctionalserialCacheCmdcacheadd_local3148815741/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 cache add minikube-local-cache-test:functional-289946
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 cache delete minikube-local-cache-test:functional-289946
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-289946
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-289946 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (258.230473ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.33s)
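
For reference, the cache_reload check above can be reproduced by hand with the same commands the test drives. A minimal shell sketch, assuming the functional-289946 profile is up and the binary sits at out/minikube-linux-amd64 as in this run:

    # remove the cached image from inside the node
    out/minikube-linux-amd64 -p functional-289946 ssh sudo docker rmi registry.k8s.io/pause:latest
    # confirm it is gone (crictl inspecti is expected to fail with "no such image")
    out/minikube-linux-amd64 -p functional-289946 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    # push the locally cached images back into the node
    out/minikube-linux-amd64 -p functional-289946 cache reload
    # the image should be present again
    out/minikube-linux-amd64 -p functional-289946 ssh sudo crictl inspecti registry.k8s.io/pause:latest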

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 kubectl -- --context functional-289946 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-289946 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (42.45s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-289946 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1212 00:18:56.975078   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
E1212 00:18:56.980790   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
E1212 00:18:56.991031   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
E1212 00:18:57.011292   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
E1212 00:18:57.051576   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
E1212 00:18:57.131938   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
E1212 00:18:57.292434   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
E1212 00:18:57.613065   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
E1212 00:18:58.254034   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
E1212 00:18:59.534543   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
E1212 00:19:02.096434   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
E1212 00:19:07.216919   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
E1212 00:19:17.457110   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-289946 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.447840304s)
functional_test.go:757: restart took 42.447977655s for "functional-289946" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.45s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-289946 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.14s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-289946 logs: (1.143373283s)
--- PASS: TestFunctional/serial/LogsCmd (1.14s)

TestFunctional/serial/LogsFileCmd (1.13s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 logs --file /tmp/TestFunctionalserialLogsFileCmd897564785/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-289946 logs --file /tmp/TestFunctionalserialLogsFileCmd897564785/001/logs.txt: (1.126111528s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.13s)

TestFunctional/serial/InvalidService (5.18s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-289946 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-289946
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-289946: exit status 115 (310.15873ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.33:32171 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-289946 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-289946 delete -f testdata/invalidsvc.yaml: (1.615399241s)
--- PASS: TestFunctional/serial/InvalidService (5.18s)

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-289946 config get cpus: exit status 14 (66.315805ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-289946 config get cpus: exit status 14 (68.57849ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
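
The config behaviour exercised here is easy to check by hand; a short sketch using the same subcommands as the test (exit status 14 simply means the requested key is not set):

    out/minikube-linux-amd64 -p functional-289946 config set cpus 2
    out/minikube-linux-amd64 -p functional-289946 config get cpus     # prints 2
    out/minikube-linux-amd64 -p functional-289946 config unset cpus
    out/minikube-linux-amd64 -p functional-289946 config get cpus     # exit status 14: key not found in config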

TestFunctional/parallel/DashboardCmd (16.61s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-289946 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-289946 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 95183: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.61s)

TestFunctional/parallel/DryRun (0.31s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-289946 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-289946 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (155.189748ms)

                                                
                                                
-- stdout --
	* [functional-289946] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17764-80294/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-80294/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:20:14.794616   94955 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:20:14.794971   94955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:20:14.794983   94955 out.go:309] Setting ErrFile to fd 2...
	I1212 00:20:14.794990   94955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:20:14.795172   94955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-80294/.minikube/bin
	I1212 00:20:14.795728   94955 out.go:303] Setting JSON to false
	I1212 00:20:14.796688   94955 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10915,"bootTime":1702329500,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:20:14.796763   94955 start.go:138] virtualization: kvm guest
	I1212 00:20:14.799014   94955 out.go:177] * [functional-289946] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 00:20:14.800994   94955 out.go:177]   - MINIKUBE_LOCATION=17764
	I1212 00:20:14.801003   94955 notify.go:220] Checking for updates...
	I1212 00:20:14.802535   94955 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:20:14.804267   94955 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17764-80294/kubeconfig
	I1212 00:20:14.805772   94955 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-80294/.minikube
	I1212 00:20:14.807211   94955 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:20:14.808779   94955 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:20:14.810813   94955 config.go:182] Loaded profile config "functional-289946": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 00:20:14.811486   94955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1212 00:20:14.811542   94955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:20:14.826653   94955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35647
	I1212 00:20:14.827128   94955 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:20:14.827730   94955 main.go:141] libmachine: Using API Version  1
	I1212 00:20:14.827751   94955 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:20:14.828270   94955 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:20:14.829254   94955 main.go:141] libmachine: (functional-289946) Calling .DriverName
	I1212 00:20:14.829554   94955 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 00:20:14.829854   94955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1212 00:20:14.829902   94955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:20:14.845187   94955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46187
	I1212 00:20:14.845564   94955 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:20:14.846026   94955 main.go:141] libmachine: Using API Version  1
	I1212 00:20:14.846052   94955 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:20:14.846439   94955 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:20:14.846606   94955 main.go:141] libmachine: (functional-289946) Calling .DriverName
	I1212 00:20:14.881380   94955 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 00:20:14.882919   94955 start.go:298] selected driver: kvm2
	I1212 00:20:14.882938   94955 start.go:902] validating driver "kvm2" against &{Name:functional-289946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-289946 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.33 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:20:14.883095   94955 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:20:14.885331   94955 out.go:177] 
	W1212 00:20:14.886683   94955 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 00:20:14.887977   94955 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-289946 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.31s)
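
As the stderr above shows, --dry-run runs the full flag validation without touching the VM, and a memory request below the 1800MB minimum is rejected with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23). The two cases the test covers, side by side:

    # fails validation: 250MB is below the usable minimum of 1800MB
    out/minikube-linux-amd64 start -p functional-289946 --dry-run --memory 250MB --alsologtostderr --driver=kvm2
    # passes validation with the profile's existing settings
    out/minikube-linux-amd64 start -p functional-289946 --dry-run --alsologtostderr -v=1 --driver=kvm2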

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-289946 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-289946 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (153.898541ms)

                                                
                                                
-- stdout --
	* [functional-289946] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17764-80294/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-80294/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:20:15.110996   95040 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:20:15.111119   95040 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:20:15.111126   95040 out.go:309] Setting ErrFile to fd 2...
	I1212 00:20:15.111133   95040 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:20:15.111455   95040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-80294/.minikube/bin
	I1212 00:20:15.112043   95040 out.go:303] Setting JSON to false
	I1212 00:20:15.113005   95040 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10915,"bootTime":1702329500,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:20:15.113067   95040 start.go:138] virtualization: kvm guest
	I1212 00:20:15.115162   95040 out.go:177] * [functional-289946] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1212 00:20:15.116618   95040 out.go:177]   - MINIKUBE_LOCATION=17764
	I1212 00:20:15.116616   95040 notify.go:220] Checking for updates...
	I1212 00:20:15.118091   95040 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:20:15.119906   95040 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17764-80294/kubeconfig
	I1212 00:20:15.121480   95040 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-80294/.minikube
	I1212 00:20:15.123035   95040 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:20:15.124571   95040 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:20:15.126397   95040 config.go:182] Loaded profile config "functional-289946": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 00:20:15.126881   95040 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1212 00:20:15.126932   95040 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:20:15.142359   95040 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36023
	I1212 00:20:15.142772   95040 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:20:15.143403   95040 main.go:141] libmachine: Using API Version  1
	I1212 00:20:15.143426   95040 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:20:15.143900   95040 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:20:15.144131   95040 main.go:141] libmachine: (functional-289946) Calling .DriverName
	I1212 00:20:15.144391   95040 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 00:20:15.144798   95040 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1212 00:20:15.144847   95040 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:20:15.160366   95040 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42323
	I1212 00:20:15.160740   95040 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:20:15.161212   95040 main.go:141] libmachine: Using API Version  1
	I1212 00:20:15.161230   95040 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:20:15.161526   95040 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:20:15.161680   95040 main.go:141] libmachine: (functional-289946) Calling .DriverName
	I1212 00:20:15.194856   95040 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1212 00:20:15.196352   95040 start.go:298] selected driver: kvm2
	I1212 00:20:15.196367   95040 start.go:902] validating driver "kvm2" against &{Name:functional-289946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-289946 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.33 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 00:20:15.196468   95040 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:20:15.198602   95040 out.go:177] 
	W1212 00:20:15.200063   95040 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 00:20:15.201355   95040 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (1.03s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)

TestFunctional/parallel/ServiceCmdConnect (21.55s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-289946 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-289946 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-49df5" [1d9bc8e9-bc7b-46b9-88bb-1fa52dcfbef2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-49df5" [1d9bc8e9-bc7b-46b9-88bb-1fa52dcfbef2] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 21.030244182s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.33:30351
functional_test.go:1674: http://192.168.39.33:30351: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-49df5

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.33:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.33:30351
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (21.55s)
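
The connectivity check above amounts to exposing a deployment as a NodePort service and fetching the URL minikube reports. A rough manual equivalent, with curl standing in for the HTTP GET the test performs (the NodePort is assigned by Kubernetes and will differ between runs):

    kubectl --context functional-289946 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-289946 expose deployment hello-node-connect --type=NodePort --port=8080
    # prints the endpoint, e.g. http://192.168.39.33:30351 in this run
    out/minikube-linux-amd64 -p functional-289946 service hello-node-connect --url
    curl http://192.168.39.33:30351   # echoserver replies with the request details shown above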

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (58.02s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [451a1815-6ec9-4887-ba75-8f3486296b06] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.01345435s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-289946 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-289946 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-289946 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-289946 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-289946 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bf0f43a2-7aae-4cf5-8093-ae02737f1b34] Pending
helpers_test.go:344: "sp-pod" [bf0f43a2-7aae-4cf5-8093-ae02737f1b34] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bf0f43a2-7aae-4cf5-8093-ae02737f1b34] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 34.01941451s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-289946 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-289946 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-289946 delete -f testdata/storage-provisioner/pod.yaml: (1.341702886s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-289946 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5405cad6-c900-4a08-b73b-4f7625b14ada] Pending
helpers_test.go:344: "sp-pod" [5405cad6-c900-4a08-b73b-4f7625b14ada] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5405cad6-c900-4a08-b73b-4f7625b14ada] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.017216299s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-289946 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (58.02s)
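
The persistence check follows a create, write, delete pod, recreate, read pattern; condensed below, using the same manifests from the minikube test tree:

    kubectl --context functional-289946 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-289946 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-289946 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-289946 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-289946 apply -f testdata/storage-provisioner/pod.yaml
    # the file survives the pod restart because /tmp/mount is backed by the PVC
    kubectl --context functional-289946 exec sp-pod -- ls /tmp/mount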

TestFunctional/parallel/SSHCmd (0.52s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.52s)

TestFunctional/parallel/CpCmd (1.11s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh -n functional-289946 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 cp functional-289946:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3541244248/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh -n functional-289946 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.11s)
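
cp is exercised in both directions (host to node, then node back to host), with each copy verified by cat-ing the file over ssh. The same two-step check by hand (the local destination path here is arbitrary; the test writes into a temporary directory):

    out/minikube-linux-amd64 -p functional-289946 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-289946 ssh -n functional-289946 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p functional-289946 cp functional-289946:/home/docker/cp-test.txt /tmp/cp-test.txt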

TestFunctional/parallel/MySQL (35.6s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-289946 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-9pvgs" [c5df8691-f1b0-4e44-ad6c-56898668302c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-9pvgs" [c5df8691-f1b0-4e44-ad6c-56898668302c] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 29.029213883s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-289946 exec mysql-859648c796-9pvgs -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-289946 exec mysql-859648c796-9pvgs -- mysql -ppassword -e "show databases;": exit status 1 (205.232966ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-289946 exec mysql-859648c796-9pvgs -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-289946 exec mysql-859648c796-9pvgs -- mysql -ppassword -e "show databases;": exit status 1 (443.745259ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-289946 exec mysql-859648c796-9pvgs -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-289946 exec mysql-859648c796-9pvgs -- mysql -ppassword -e "show databases;": exit status 1 (321.973765ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-289946 exec mysql-859648c796-9pvgs -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (35.60s)
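
The non-zero exits above are expected: the mysql container takes a while to initialise, so the test retries the same query until mysqld accepts connections. The commands it loops on, for reference (the pod name is specific to this run):

    kubectl --context functional-289946 replace --force -f testdata/mysql.yaml
    kubectl --context functional-289946 exec mysql-859648c796-9pvgs -- mysql -ppassword -e "show databases;"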

TestFunctional/parallel/FileSync (0.25s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/87609/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh "sudo cat /etc/test/nested/copy/87609/hosts"
E1212 00:19:37.937279   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

TestFunctional/parallel/CertSync (1.57s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/87609.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh "sudo cat /etc/ssl/certs/87609.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/87609.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh "sudo cat /usr/share/ca-certificates/87609.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/876092.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh "sudo cat /etc/ssl/certs/876092.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/876092.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh "sudo cat /usr/share/ca-certificates/876092.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.57s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-289946 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.26s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-289946 ssh "sudo systemctl is-active crio": exit status 1 (260.174844ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.26s)

TestFunctional/parallel/License (0.16s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

TestFunctional/parallel/DockerEnv/bash (1.11s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-289946 docker-env) && out/minikube-linux-amd64 status -p functional-289946"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-289946 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.11s)
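
docker-env points a local docker client at the daemon inside the minikube VM; the test verifies this by eval-ing the exported variables in a bash subshell and listing images. The same one-liner, for reference:

    eval $(out/minikube-linux-amd64 -p functional-289946 docker-env) && docker images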

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-289946 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-289946
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-289946
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-289946 image ls --format short --alsologtostderr:
I1212 00:20:21.620060   95411 out.go:296] Setting OutFile to fd 1 ...
I1212 00:20:21.620205   95411 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:20:21.620214   95411 out.go:309] Setting ErrFile to fd 2...
I1212 00:20:21.620219   95411 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:20:21.620429   95411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-80294/.minikube/bin
I1212 00:20:21.621048   95411 config.go:182] Loaded profile config "functional-289946": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 00:20:21.621196   95411 config.go:182] Loaded profile config "functional-289946": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 00:20:21.621611   95411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1212 00:20:21.621660   95411 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 00:20:21.636077   95411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39967
I1212 00:20:21.636556   95411 main.go:141] libmachine: () Calling .GetVersion
I1212 00:20:21.637090   95411 main.go:141] libmachine: Using API Version  1
I1212 00:20:21.637113   95411 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 00:20:21.637438   95411 main.go:141] libmachine: () Calling .GetMachineName
I1212 00:20:21.637635   95411 main.go:141] libmachine: (functional-289946) Calling .GetState
I1212 00:20:21.639470   95411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1212 00:20:21.639520   95411 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 00:20:21.653365   95411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41429
I1212 00:20:21.653758   95411 main.go:141] libmachine: () Calling .GetVersion
I1212 00:20:21.654188   95411 main.go:141] libmachine: Using API Version  1
I1212 00:20:21.654214   95411 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 00:20:21.654524   95411 main.go:141] libmachine: () Calling .GetMachineName
I1212 00:20:21.654751   95411 main.go:141] libmachine: (functional-289946) Calling .DriverName
I1212 00:20:21.654972   95411 ssh_runner.go:195] Run: systemctl --version
I1212 00:20:21.655002   95411 main.go:141] libmachine: (functional-289946) Calling .GetSSHHostname
I1212 00:20:21.657810   95411 main.go:141] libmachine: (functional-289946) DBG | domain functional-289946 has defined MAC address 52:54:00:d9:30:7b in network mk-functional-289946
I1212 00:20:21.658258   95411 main.go:141] libmachine: (functional-289946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:30:7b", ip: ""} in network mk-functional-289946: {Iface:virbr1 ExpiryTime:2023-12-12 01:17:12 +0000 UTC Type:0 Mac:52:54:00:d9:30:7b Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:functional-289946 Clientid:01:52:54:00:d9:30:7b}
I1212 00:20:21.658280   95411 main.go:141] libmachine: (functional-289946) DBG | domain functional-289946 has defined IP address 192.168.39.33 and MAC address 52:54:00:d9:30:7b in network mk-functional-289946
I1212 00:20:21.658445   95411 main.go:141] libmachine: (functional-289946) Calling .GetSSHPort
I1212 00:20:21.658609   95411 main.go:141] libmachine: (functional-289946) Calling .GetSSHKeyPath
I1212 00:20:21.658767   95411 main.go:141] libmachine: (functional-289946) Calling .GetSSHUsername
I1212 00:20:21.658894   95411 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/functional-289946/id_rsa Username:docker}
I1212 00:20:21.754060   95411 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1212 00:20:21.785693   95411 main.go:141] libmachine: Making call to close driver server
I1212 00:20:21.785708   95411 main.go:141] libmachine: (functional-289946) Calling .Close
I1212 00:20:21.786009   95411 main.go:141] libmachine: (functional-289946) DBG | Closing plugin on server side
I1212 00:20:21.786073   95411 main.go:141] libmachine: Successfully made call to close driver server
I1212 00:20:21.786098   95411 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 00:20:21.786110   95411 main.go:141] libmachine: Making call to close driver server
I1212 00:20:21.786129   95411 main.go:141] libmachine: (functional-289946) Calling .Close
I1212 00:20:21.786346   95411 main.go:141] libmachine: Successfully made call to close driver server
I1212 00:20:21.786379   95411 main.go:141] libmachine: (functional-289946) DBG | Closing plugin on server side
I1212 00:20:21.786384   95411 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-289946 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| docker.io/library/mysql                     | 5.7               | bdba757bc9336 | 501MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-289946 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/localhost/my-image                | functional-289946 | eda3f69adcb5d | 1.24MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-289946 | d0b74e565535f | 30B    |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/library/nginx                     | latest            | a6bd71f48f683 | 187MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-289946 image ls --format table --alsologtostderr:
I1212 00:20:25.980939   95918 out.go:296] Setting OutFile to fd 1 ...
I1212 00:20:25.981124   95918 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:20:25.981136   95918 out.go:309] Setting ErrFile to fd 2...
I1212 00:20:25.981143   95918 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:20:25.981448   95918 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-80294/.minikube/bin
I1212 00:20:25.982288   95918 config.go:182] Loaded profile config "functional-289946": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 00:20:25.982447   95918 config.go:182] Loaded profile config "functional-289946": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 00:20:25.983066   95918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1212 00:20:25.983121   95918 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 00:20:25.998393   95918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37727
I1212 00:20:25.998941   95918 main.go:141] libmachine: () Calling .GetVersion
I1212 00:20:25.999655   95918 main.go:141] libmachine: Using API Version  1
I1212 00:20:25.999677   95918 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 00:20:26.000142   95918 main.go:141] libmachine: () Calling .GetMachineName
I1212 00:20:26.000365   95918 main.go:141] libmachine: (functional-289946) Calling .GetState
I1212 00:20:26.002365   95918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1212 00:20:26.002414   95918 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 00:20:26.016493   95918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35309
I1212 00:20:26.016913   95918 main.go:141] libmachine: () Calling .GetVersion
I1212 00:20:26.017437   95918 main.go:141] libmachine: Using API Version  1
I1212 00:20:26.017458   95918 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 00:20:26.017857   95918 main.go:141] libmachine: () Calling .GetMachineName
I1212 00:20:26.018108   95918 main.go:141] libmachine: (functional-289946) Calling .DriverName
I1212 00:20:26.018350   95918 ssh_runner.go:195] Run: systemctl --version
I1212 00:20:26.018381   95918 main.go:141] libmachine: (functional-289946) Calling .GetSSHHostname
I1212 00:20:26.021233   95918 main.go:141] libmachine: (functional-289946) DBG | domain functional-289946 has defined MAC address 52:54:00:d9:30:7b in network mk-functional-289946
I1212 00:20:26.021626   95918 main.go:141] libmachine: (functional-289946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:30:7b", ip: ""} in network mk-functional-289946: {Iface:virbr1 ExpiryTime:2023-12-12 01:17:12 +0000 UTC Type:0 Mac:52:54:00:d9:30:7b Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:functional-289946 Clientid:01:52:54:00:d9:30:7b}
I1212 00:20:26.021666   95918 main.go:141] libmachine: (functional-289946) DBG | domain functional-289946 has defined IP address 192.168.39.33 and MAC address 52:54:00:d9:30:7b in network mk-functional-289946
I1212 00:20:26.021726   95918 main.go:141] libmachine: (functional-289946) Calling .GetSSHPort
I1212 00:20:26.021896   95918 main.go:141] libmachine: (functional-289946) Calling .GetSSHKeyPath
I1212 00:20:26.022049   95918 main.go:141] libmachine: (functional-289946) Calling .GetSSHUsername
I1212 00:20:26.022182   95918 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/functional-289946/id_rsa Username:docker}
I1212 00:20:26.126758   95918 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1212 00:20:26.190789   95918 main.go:141] libmachine: Making call to close driver server
I1212 00:20:26.190808   95918 main.go:141] libmachine: (functional-289946) Calling .Close
I1212 00:20:26.191141   95918 main.go:141] libmachine: (functional-289946) DBG | Closing plugin on server side
I1212 00:20:26.191139   95918 main.go:141] libmachine: Successfully made call to close driver server
I1212 00:20:26.191180   95918 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 00:20:26.191198   95918 main.go:141] libmachine: Making call to close driver server
I1212 00:20:26.191213   95918 main.go:141] libmachine: (functional-289946) Calling .Close
I1212 00:20:26.191626   95918 main.go:141] libmachine: Successfully made call to close driver server
I1212 00:20:26.191645   95918 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 00:20:26.191657   95918 main.go:141] libmachine: (functional-289946) DBG | Closing plugin on server side
2023/12/12 00:20:31 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-289946 image ls --format json --alsologtostderr:
[{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"eda3f69adcb5d7ce782b185c5ed84585c485480691e1bbcada4403ea758d186f","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-289946"],"size":"1240000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"bdba757bc9336a536d6884ecfaef00d24c1da3bec
d41e094eb226076436f258c","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"d0b74e565535fb2b7b1b1ef866018501a561ee8c12b804aa66f3026628f13a20","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-289946"],"size":"30"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags"
:["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-289946"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-289946 image ls --format json --alsologtostderr:
I1212 00:20:25.740447   95871 out.go:296] Setting OutFile to fd 1 ...
I1212 00:20:25.740629   95871 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:20:25.740640   95871 out.go:309] Setting ErrFile to fd 2...
I1212 00:20:25.740648   95871 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:20:25.740949   95871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-80294/.minikube/bin
I1212 00:20:25.741840   95871 config.go:182] Loaded profile config "functional-289946": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 00:20:25.742024   95871 config.go:182] Loaded profile config "functional-289946": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 00:20:25.742594   95871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1212 00:20:25.742669   95871 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 00:20:25.757711   95871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43809
I1212 00:20:25.758165   95871 main.go:141] libmachine: () Calling .GetVersion
I1212 00:20:25.758797   95871 main.go:141] libmachine: Using API Version  1
I1212 00:20:25.758827   95871 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 00:20:25.759230   95871 main.go:141] libmachine: () Calling .GetMachineName
I1212 00:20:25.759434   95871 main.go:141] libmachine: (functional-289946) Calling .GetState
I1212 00:20:25.761507   95871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1212 00:20:25.761556   95871 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 00:20:25.776781   95871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35781
I1212 00:20:25.777261   95871 main.go:141] libmachine: () Calling .GetVersion
I1212 00:20:25.777840   95871 main.go:141] libmachine: Using API Version  1
I1212 00:20:25.777865   95871 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 00:20:25.778286   95871 main.go:141] libmachine: () Calling .GetMachineName
I1212 00:20:25.778539   95871 main.go:141] libmachine: (functional-289946) Calling .DriverName
I1212 00:20:25.778802   95871 ssh_runner.go:195] Run: systemctl --version
I1212 00:20:25.778867   95871 main.go:141] libmachine: (functional-289946) Calling .GetSSHHostname
I1212 00:20:25.782260   95871 main.go:141] libmachine: (functional-289946) DBG | domain functional-289946 has defined MAC address 52:54:00:d9:30:7b in network mk-functional-289946
I1212 00:20:25.782710   95871 main.go:141] libmachine: (functional-289946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:30:7b", ip: ""} in network mk-functional-289946: {Iface:virbr1 ExpiryTime:2023-12-12 01:17:12 +0000 UTC Type:0 Mac:52:54:00:d9:30:7b Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:functional-289946 Clientid:01:52:54:00:d9:30:7b}
I1212 00:20:25.782744   95871 main.go:141] libmachine: (functional-289946) DBG | domain functional-289946 has defined IP address 192.168.39.33 and MAC address 52:54:00:d9:30:7b in network mk-functional-289946
I1212 00:20:25.783015   95871 main.go:141] libmachine: (functional-289946) Calling .GetSSHPort
I1212 00:20:25.783200   95871 main.go:141] libmachine: (functional-289946) Calling .GetSSHKeyPath
I1212 00:20:25.783364   95871 main.go:141] libmachine: (functional-289946) Calling .GetSSHUsername
I1212 00:20:25.783517   95871 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/functional-289946/id_rsa Username:docker}
I1212 00:20:25.881065   95871 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1212 00:20:25.908672   95871 main.go:141] libmachine: Making call to close driver server
I1212 00:20:25.908690   95871 main.go:141] libmachine: (functional-289946) Calling .Close
I1212 00:20:25.908949   95871 main.go:141] libmachine: Successfully made call to close driver server
I1212 00:20:25.908975   95871 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 00:20:25.908985   95871 main.go:141] libmachine: Making call to close driver server
I1212 00:20:25.908995   95871 main.go:141] libmachine: (functional-289946) Calling .Close
I1212 00:20:25.909010   95871 main.go:141] libmachine: (functional-289946) DBG | Closing plugin on server side
I1212 00:20:25.909248   95871 main.go:141] libmachine: Successfully made call to close driver server
I1212 00:20:25.909262   95871 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-289946 image ls --format yaml --alsologtostderr:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: d0b74e565535fb2b7b1b1ef866018501a561ee8c12b804aa66f3026628f13a20
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-289946
size: "30"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-289946
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-289946 image ls --format yaml --alsologtostderr:
I1212 00:20:21.857230   95440 out.go:296] Setting OutFile to fd 1 ...
I1212 00:20:21.857414   95440 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:20:21.857445   95440 out.go:309] Setting ErrFile to fd 2...
I1212 00:20:21.857470   95440 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:20:21.857922   95440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-80294/.minikube/bin
I1212 00:20:21.858637   95440 config.go:182] Loaded profile config "functional-289946": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 00:20:21.858788   95440 config.go:182] Loaded profile config "functional-289946": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 00:20:21.859299   95440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1212 00:20:21.859395   95440 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 00:20:21.878570   95440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45867
I1212 00:20:21.879355   95440 main.go:141] libmachine: () Calling .GetVersion
I1212 00:20:21.880155   95440 main.go:141] libmachine: Using API Version  1
I1212 00:20:21.880175   95440 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 00:20:21.880605   95440 main.go:141] libmachine: () Calling .GetMachineName
I1212 00:20:21.880818   95440 main.go:141] libmachine: (functional-289946) Calling .GetState
I1212 00:20:21.882715   95440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1212 00:20:21.882757   95440 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 00:20:21.904005   95440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32839
I1212 00:20:21.904424   95440 main.go:141] libmachine: () Calling .GetVersion
I1212 00:20:21.904996   95440 main.go:141] libmachine: Using API Version  1
I1212 00:20:21.905015   95440 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 00:20:21.905448   95440 main.go:141] libmachine: () Calling .GetMachineName
I1212 00:20:21.905655   95440 main.go:141] libmachine: (functional-289946) Calling .DriverName
I1212 00:20:21.905892   95440 ssh_runner.go:195] Run: systemctl --version
I1212 00:20:21.905920   95440 main.go:141] libmachine: (functional-289946) Calling .GetSSHHostname
I1212 00:20:21.911028   95440 main.go:141] libmachine: (functional-289946) DBG | domain functional-289946 has defined MAC address 52:54:00:d9:30:7b in network mk-functional-289946
I1212 00:20:21.911449   95440 main.go:141] libmachine: (functional-289946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:30:7b", ip: ""} in network mk-functional-289946: {Iface:virbr1 ExpiryTime:2023-12-12 01:17:12 +0000 UTC Type:0 Mac:52:54:00:d9:30:7b Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:functional-289946 Clientid:01:52:54:00:d9:30:7b}
I1212 00:20:21.911473   95440 main.go:141] libmachine: (functional-289946) DBG | domain functional-289946 has defined IP address 192.168.39.33 and MAC address 52:54:00:d9:30:7b in network mk-functional-289946
I1212 00:20:21.911627   95440 main.go:141] libmachine: (functional-289946) Calling .GetSSHPort
I1212 00:20:21.911785   95440 main.go:141] libmachine: (functional-289946) Calling .GetSSHKeyPath
I1212 00:20:21.911942   95440 main.go:141] libmachine: (functional-289946) Calling .GetSSHUsername
I1212 00:20:21.912077   95440 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/functional-289946/id_rsa Username:docker}
I1212 00:20:22.014525   95440 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1212 00:20:22.107004   95440 main.go:141] libmachine: Making call to close driver server
I1212 00:20:22.107023   95440 main.go:141] libmachine: (functional-289946) Calling .Close
I1212 00:20:22.107306   95440 main.go:141] libmachine: (functional-289946) DBG | Closing plugin on server side
I1212 00:20:22.107354   95440 main.go:141] libmachine: Successfully made call to close driver server
I1212 00:20:22.107364   95440 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 00:20:22.107380   95440 main.go:141] libmachine: Making call to close driver server
I1212 00:20:22.107390   95440 main.go:141] libmachine: (functional-289946) Calling .Close
I1212 00:20:22.107661   95440 main.go:141] libmachine: Successfully made call to close driver server
I1212 00:20:22.107691   95440 main.go:141] libmachine: (functional-289946) DBG | Closing plugin on server side
I1212 00:20:22.107744   95440 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-289946 ssh pgrep buildkitd: exit status 1 (255.497482ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 image build -t localhost/my-image:functional-289946 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-289946 image build -t localhost/my-image:functional-289946 testdata/build --alsologtostderr: (3.402561251s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-289946 image build -t localhost/my-image:functional-289946 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 530bf522564e
Removing intermediate container 530bf522564e
---> 72f8121317b2
Step 3/3 : ADD content.txt /
---> eda3f69adcb5
Successfully built eda3f69adcb5
Successfully tagged localhost/my-image:functional-289946
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-289946 image build -t localhost/my-image:functional-289946 testdata/build --alsologtostderr:
I1212 00:20:22.433324   95550 out.go:296] Setting OutFile to fd 1 ...
I1212 00:20:22.433450   95550 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:20:22.433458   95550 out.go:309] Setting ErrFile to fd 2...
I1212 00:20:22.433463   95550 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:20:22.433697   95550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-80294/.minikube/bin
I1212 00:20:22.434356   95550 config.go:182] Loaded profile config "functional-289946": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 00:20:22.434892   95550 config.go:182] Loaded profile config "functional-289946": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 00:20:22.435334   95550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1212 00:20:22.435398   95550 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 00:20:22.450153   95550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35371
I1212 00:20:22.450623   95550 main.go:141] libmachine: () Calling .GetVersion
I1212 00:20:22.451185   95550 main.go:141] libmachine: Using API Version  1
I1212 00:20:22.451201   95550 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 00:20:22.451562   95550 main.go:141] libmachine: () Calling .GetMachineName
I1212 00:20:22.451774   95550 main.go:141] libmachine: (functional-289946) Calling .GetState
I1212 00:20:22.453834   95550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1212 00:20:22.453876   95550 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 00:20:22.468411   95550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44765
I1212 00:20:22.468818   95550 main.go:141] libmachine: () Calling .GetVersion
I1212 00:20:22.469243   95550 main.go:141] libmachine: Using API Version  1
I1212 00:20:22.469267   95550 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 00:20:22.469596   95550 main.go:141] libmachine: () Calling .GetMachineName
I1212 00:20:22.469785   95550 main.go:141] libmachine: (functional-289946) Calling .DriverName
I1212 00:20:22.469998   95550 ssh_runner.go:195] Run: systemctl --version
I1212 00:20:22.470046   95550 main.go:141] libmachine: (functional-289946) Calling .GetSSHHostname
I1212 00:20:22.472821   95550 main.go:141] libmachine: (functional-289946) DBG | domain functional-289946 has defined MAC address 52:54:00:d9:30:7b in network mk-functional-289946
I1212 00:20:22.473213   95550 main.go:141] libmachine: (functional-289946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:30:7b", ip: ""} in network mk-functional-289946: {Iface:virbr1 ExpiryTime:2023-12-12 01:17:12 +0000 UTC Type:0 Mac:52:54:00:d9:30:7b Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:functional-289946 Clientid:01:52:54:00:d9:30:7b}
I1212 00:20:22.473243   95550 main.go:141] libmachine: (functional-289946) DBG | domain functional-289946 has defined IP address 192.168.39.33 and MAC address 52:54:00:d9:30:7b in network mk-functional-289946
I1212 00:20:22.473372   95550 main.go:141] libmachine: (functional-289946) Calling .GetSSHPort
I1212 00:20:22.473515   95550 main.go:141] libmachine: (functional-289946) Calling .GetSSHKeyPath
I1212 00:20:22.473697   95550 main.go:141] libmachine: (functional-289946) Calling .GetSSHUsername
I1212 00:20:22.473850   95550 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/functional-289946/id_rsa Username:docker}
I1212 00:20:22.575542   95550 build_images.go:151] Building image from path: /tmp/build.113489063.tar
I1212 00:20:22.575615   95550 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 00:20:22.604158   95550 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.113489063.tar
I1212 00:20:22.610633   95550 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.113489063.tar: stat -c "%s %y" /var/lib/minikube/build/build.113489063.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.113489063.tar': No such file or directory
I1212 00:20:22.610663   95550 ssh_runner.go:362] scp /tmp/build.113489063.tar --> /var/lib/minikube/build/build.113489063.tar (3072 bytes)
I1212 00:20:22.667725   95550 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.113489063
I1212 00:20:22.678269   95550 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.113489063 -xf /var/lib/minikube/build/build.113489063.tar
I1212 00:20:22.690987   95550 docker.go:346] Building image: /var/lib/minikube/build/build.113489063
I1212 00:20:22.691048   95550 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-289946 /var/lib/minikube/build/build.113489063
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I1212 00:20:25.744662   95550 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-289946 /var/lib/minikube/build/build.113489063: (3.053590456s)
I1212 00:20:25.744746   95550 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.113489063
I1212 00:20:25.754109   95550 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.113489063.tar
I1212 00:20:25.767433   95550 build_images.go:207] Built localhost/my-image:functional-289946 from /tmp/build.113489063.tar
I1212 00:20:25.767466   95550 build_images.go:123] succeeded building to: functional-289946
I1212 00:20:25.767472   95550 build_images.go:124] failed building to: 
I1212 00:20:25.767517   95550 main.go:141] libmachine: Making call to close driver server
I1212 00:20:25.767543   95550 main.go:141] libmachine: (functional-289946) Calling .Close
I1212 00:20:25.767808   95550 main.go:141] libmachine: Successfully made call to close driver server
I1212 00:20:25.767836   95550 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 00:20:25.767852   95550 main.go:141] libmachine: (functional-289946) DBG | Closing plugin on server side
I1212 00:20:25.767861   95550 main.go:141] libmachine: Making call to close driver server
I1212 00:20:25.767871   95550 main.go:141] libmachine: (functional-289946) Calling .Close
I1212 00:20:25.768112   95550 main.go:141] libmachine: (functional-289946) DBG | Closing plugin on server side
I1212 00:20:25.768153   95550 main.go:141] libmachine: Successfully made call to close driver server
I1212 00:20:25.768165   95550 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.92s)

TestFunctional/parallel/ImageCommands/Setup (1.35s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.326422644s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-289946
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.35s)

TestFunctional/parallel/ServiceCmd/DeployApp (32.19s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-289946 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-289946 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-kg7q9" [59974f5b-f1fe-402d-b564-cb88ae3980ea] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-kg7q9" [59974f5b-f1fe-402d-b564-cb88ae3980ea] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 32.023283423s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (32.19s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 image load --daemon gcr.io/google-containers/addon-resizer:functional-289946 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-289946 image load --daemon gcr.io/google-containers/addon-resizer:functional-289946 --alsologtostderr: (4.680039538s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.92s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 image load --daemon gcr.io/google-containers/addon-resizer:functional-289946 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-289946 image load --daemon gcr.io/google-containers/addon-resizer:functional-289946 --alsologtostderr: (2.2753666s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.50s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.702898606s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-289946
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 image load --daemon gcr.io/google-containers/addon-resizer:functional-289946 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-289946 image load --daemon gcr.io/google-containers/addon-resizer:functional-289946 --alsologtostderr: (4.370059033s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.31s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 image save gcr.io/google-containers/addon-resizer:functional-289946 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-289946 image save gcr.io/google-containers/addon-resizer:functional-289946 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.648105674s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.65s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 image rm gcr.io/google-containers/addon-resizer:functional-289946 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-289946 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.560006681s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.79s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-289946
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 image save --daemon gcr.io/google-containers/addon-resizer:functional-289946 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-289946 image save --daemon gcr.io/google-containers/addon-resizer:functional-289946 --alsologtostderr: (1.848291411s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-289946
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.89s)

TestFunctional/parallel/ServiceCmd/List (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 service list -o json
functional_test.go:1493: Took "460.427768ms" to run "out/minikube-linux-amd64 -p functional-289946 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.33:32511
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

TestFunctional/parallel/ServiceCmd/Format (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

TestFunctional/parallel/ServiceCmd/URL (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.33:32511
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/MountCmd/any-port (7.83s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-289946 /tmp/TestFunctionalparallelMountCmdany-port3716881849/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1702340413991185322" to /tmp/TestFunctionalparallelMountCmdany-port3716881849/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1702340413991185322" to /tmp/TestFunctionalparallelMountCmdany-port3716881849/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1702340413991185322" to /tmp/TestFunctionalparallelMountCmdany-port3716881849/001/test-1702340413991185322
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-289946 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (286.979918ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 12 00:20 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 12 00:20 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 12 00:20 test-1702340413991185322
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh cat /mount-9p/test-1702340413991185322
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-289946 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [f3cc6d3d-5e83-45a0-ab68-6c348b39bdfe] Pending
helpers_test.go:344: "busybox-mount" [f3cc6d3d-5e83-45a0-ab68-6c348b39bdfe] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E1212 00:20:18.898035   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [f3cc6d3d-5e83-45a0-ab68-6c348b39bdfe] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [f3cc6d3d-5e83-45a0-ab68-6c348b39bdfe] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.020679206s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-289946 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-289946 /tmp/TestFunctionalparallelMountCmdany-port3716881849/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.83s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "306.833441ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "58.949335ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "222.512463ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "60.772205ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.75s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-289946 /tmp/TestFunctionalparallelMountCmdspecific-port902037852/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-289946 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (276.061824ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-289946 /tmp/TestFunctionalparallelMountCmdspecific-port902037852/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-289946 ssh "sudo umount -f /mount-9p": exit status 1 (226.476318ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-289946 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-289946 /tmp/TestFunctionalparallelMountCmdspecific-port902037852/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.11s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-289946 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1026142925/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-289946 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1026142925/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-289946 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1026142925/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-289946 ssh "findmnt -T" /mount1: exit status 1 (374.113115ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-289946 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-289946 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-289946 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1026142925/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-289946 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1026142925/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-289946 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1026142925/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.75s)
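
The cleanup test above starts three mounts and then terminates them all at once with `mount --kill=true` (functional_test_mount_test.go:370). A condensed sketch of the same sequence; the /tmp/mount-demo host path is a placeholder, the mount targets and kill flag come from this run.

	out/minikube-linux-amd64 mount -p functional-289946 /tmp/mount-demo:/mount1 --alsologtostderr -v=1 &
	out/minikube-linux-amd64 mount -p functional-289946 /tmp/mount-demo:/mount2 --alsologtostderr -v=1 &
	out/minikube-linux-amd64 mount -p functional-289946 /tmp/mount-demo:/mount3 --alsologtostderr -v=1 &
	out/minikube-linux-amd64 mount -p functional-289946 --kill=true   # kills every background mount process for the profile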

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-289946
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-289946
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-289946
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestGvisorAddon (198.01s)

                                                
                                                
=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

                                                
                                                

                                                
                                                
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-583949 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E1212 00:48:12.802797   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/ingress-addon-legacy-071051/client.crt: no such file or directory
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-583949 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m7.772289329s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-583949 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-583949 cache add gcr.io/k8s-minikube/gvisor-addon:2: (23.064718805s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-583949 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-583949 addons enable gvisor: (3.260872074s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [1ed0dc50-9cc7-4d24-a5a0-220e2edcff99] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.022995089s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-583949 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [79e7a618-9307-4c83-ac13-2e7a1995e3bc] Pending
helpers_test.go:344: "nginx-gvisor" [79e7a618-9307-4c83-ac13-2e7a1995e3bc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1212 00:49:38.390166   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
helpers_test.go:344: "nginx-gvisor" [79e7a618-9307-4c83-ac13-2e7a1995e3bc] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 16.026253547s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-583949
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-583949: (2.118010653s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-583949 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-583949 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m8.465352667s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [1ed0dc50-9cc7-4d24-a5a0-220e2edcff99] Running
E1212 00:51:02.974397   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/skaffold-261002/client.crt: no such file or directory
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.023491479s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [79e7a618-9307-4c83-ac13-2e7a1995e3bc] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1212 00:51:08.095439   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/skaffold-261002/client.crt: no such file or directory
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.014253931s
helpers_test.go:175: Cleaning up "gvisor-583949" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-583949
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-583949: (1.901631197s)
--- PASS: TestGvisorAddon (198.01s)
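
The gVisor test above boots a containerd-runtime cluster, enables the gvisor addon, and deploys an nginx pod from testdata/nginx-gvisor.yaml that must come up under the gvisor runtime. A sketch of the same steps by hand; the manifest itself is not shown in this report and is assumed to request runtimeClassName: gvisor, while the flags, profile name, and label selector are taken from this run.

	# gVisor requires the containerd runtime
	out/minikube-linux-amd64 start -p gvisor-583949 --memory=2200 --container-runtime=containerd --driver=kvm2
	out/minikube-linux-amd64 -p gvisor-583949 addons enable gvisor
	kubectl --context gvisor-583949 replace --force -f testdata/nginx-gvisor.yaml   # manifest assumed to set runtimeClassName: gvisor
	kubectl --context gvisor-583949 get pods -l run=nginx,runtime=gvisor            # selector used by the test's readiness wait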

                                                
                                    
x
+
TestImageBuild/serial/Setup (50.85s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-988214 --driver=kvm2 
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-988214 --driver=kvm2 : (50.851153073s)
--- PASS: TestImageBuild/serial/Setup (50.85s)

                                                
                                    
x
+
TestImageBuild/serial/NormalBuild (1.68s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-988214
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-988214: (1.678290595s)
--- PASS: TestImageBuild/serial/NormalBuild (1.68s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithBuildArg (1.42s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-988214
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-988214: (1.421937584s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.42s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithDockerIgnore (0.43s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-988214
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.43s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.29s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-988214
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.29s)
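
The four image-build subtests above cover the main `minikube image build` variants: a plain build, `--build-opt` for build args and cache control, and `-f` for a non-default Dockerfile. A condensed sketch, with tags, contexts, and flags taken from the runs above against the image-988214 profile.

	out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-988214
	out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-988214
	out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-988214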

                                                
                                    
x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (80.02s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-071051 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
E1212 00:21:40.819199   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-071051 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (1m20.022740865s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (80.02s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.46s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-071051 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-071051 addons enable ingress --alsologtostderr -v=5: (17.458605022s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.46s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.52s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-071051 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.52s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (43.88s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-071051 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-071051 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.688544573s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-071051 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-071051 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [73ec582a-669d-436b-b7eb-6f9aa31f6d3e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [73ec582a-669d-436b-b7eb-6f9aa31f6d3e] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 12.018679728s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-071051 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-071051 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-071051 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.50.253
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-071051 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-071051 addons disable ingress-dns --alsologtostderr -v=1: (11.44290011s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-071051 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-071051 addons disable ingress --alsologtostderr -v=1: (7.504668113s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (43.88s)
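
The legacy ingress test above enables the ingress and ingress-dns addons, deploys an nginx pod/service plus a v1beta1 Ingress, and then checks routing with a Host-header curl and name resolution against the cluster IP. A condensed sketch of the verification steps; the hostnames come from the testdata manifests used by this run.

	out/minikube-linux-amd64 -p ingress-addon-legacy-071051 addons enable ingress
	out/minikube-linux-amd64 -p ingress-addon-legacy-071051 addons enable ingress-dns
	out/minikube-linux-amd64 -p ingress-addon-legacy-071051 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test "$(out/minikube-linux-amd64 -p ingress-addon-legacy-071051 ip)"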

                                                
                                    
x
+
TestJSONOutput/start/Command (106.47s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-813450 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E1212 00:24:24.660547   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
E1212 00:24:38.390145   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
E1212 00:24:38.395429   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
E1212 00:24:38.405662   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
E1212 00:24:38.425952   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
E1212 00:24:38.466306   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
E1212 00:24:38.546680   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
E1212 00:24:38.707139   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
E1212 00:24:39.027756   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
E1212 00:24:39.668730   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
E1212 00:24:40.949141   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
E1212 00:24:43.509560   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
E1212 00:24:48.630387   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
E1212 00:24:58.871567   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
E1212 00:25:19.352274   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-813450 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m46.470444079s)
--- PASS: TestJSONOutput/start/Command (106.47s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-813450 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.60s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.56s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-813450 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.56s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (13.12s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-813450 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-813450 --output=json --user=testUser: (13.115760582s)
--- PASS: TestJSONOutput/stop/Command (13.12s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-883467 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-883467 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.203377ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"65fe2b02-a698-40e0-90ea-efbbb057d99e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-883467] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"dc567fc9-da79-4bbf-adcd-f0968cfd66fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17764"}}
	{"specversion":"1.0","id":"f329323f-7d45-485f-9b11-18ebca8903fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fac43545-d9cb-4395-84f0-f9551df7760c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17764-80294/kubeconfig"}}
	{"specversion":"1.0","id":"9e62d3b3-4389-49af-8301-aee0c9f4b201","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-80294/.minikube"}}
	{"specversion":"1.0","id":"16ea52ab-50d9-4fe9-9756-c1bd52103dce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e01b3c27-5d8e-4f2f-89fc-96b24dc4c30e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f4f894a0-d065-4a57-aa37-11ac495dc41f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-883467" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-883467
--- PASS: TestErrorJSONOutput (0.22s)
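
With --output=json, minikube emits one CloudEvents-style JSON object per line, as shown in the stdout above (specversion, a type such as io.k8s.sigs.minikube.step or io.k8s.sigs.minikube.error, and a data payload). A sketch of filtering that stream with jq; the field paths follow the objects printed above, and the assumption is that events stay newline-delimited on stdout.

	# print only the human-readable step messages from a JSON-output start
	out/minikube-linux-amd64 start -p json-output-813450 --output=json \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'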

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (105.37s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-373770 --driver=kvm2 
E1212 00:26:00.312995   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-373770 --driver=kvm2 : (53.700290259s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-376226 --driver=kvm2 
E1212 00:27:22.233285   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-376226 --driver=kvm2 : (48.991538918s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-373770
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-376226
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-376226" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-376226
helpers_test.go:175: Cleaning up "first-373770" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-373770
--- PASS: TestMinikubeProfile (105.37s)
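
TestMinikubeProfile creates two clusters and switches the active profile with `minikube profile <name>`, confirming the switch via the JSON listing. A sketch of that switch using the first profile from this run.

	out/minikube-linux-amd64 profile first-373770     # make first-373770 the active profile
	out/minikube-linux-amd64 profile list -ojson      # inspect which profile is now active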

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (29.72s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-401182 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E1212 00:28:12.802578   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/ingress-addon-legacy-071051/client.crt: no such file or directory
E1212 00:28:12.807880   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/ingress-addon-legacy-071051/client.crt: no such file or directory
E1212 00:28:12.818127   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/ingress-addon-legacy-071051/client.crt: no such file or directory
E1212 00:28:12.838408   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/ingress-addon-legacy-071051/client.crt: no such file or directory
E1212 00:28:12.878720   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/ingress-addon-legacy-071051/client.crt: no such file or directory
E1212 00:28:12.959069   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/ingress-addon-legacy-071051/client.crt: no such file or directory
E1212 00:28:13.119549   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/ingress-addon-legacy-071051/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-401182 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (28.722522519s)
E1212 00:28:13.440352   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/ingress-addon-legacy-071051/client.crt: no such file or directory
E1212 00:28:14.081325   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/ingress-addon-legacy-071051/client.crt: no such file or directory
--- PASS: TestMountStart/serial/StartWithMountFirst (29.72s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-401182 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-401182 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
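
TestMountStart boots a Kubernetes-free VM with the host directory mounted over 9p (--mount plus explicit uid/gid/msize/port) and verifies the mount from inside the guest. A sketch combining the start and verify commands from the runs above.

	out/minikube-linux-amd64 start -p mount-start-1-401182 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2
	out/minikube-linux-amd64 -p mount-start-1-401182 ssh -- ls /minikube-host
	out/minikube-linux-amd64 -p mount-start-1-401182 ssh -- mount | grep 9p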

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (31.33s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-422735 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E1212 00:28:15.361602   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/ingress-addon-legacy-071051/client.crt: no such file or directory
E1212 00:28:17.922482   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/ingress-addon-legacy-071051/client.crt: no such file or directory
E1212 00:28:23.043098   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/ingress-addon-legacy-071051/client.crt: no such file or directory
E1212 00:28:33.283946   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/ingress-addon-legacy-071051/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-422735 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (30.333026732s)
--- PASS: TestMountStart/serial/StartWithMountSecond (31.33s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-422735 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-422735 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.42s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-401182 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.43s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-422735 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-422735 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.43s)

                                                
                                    
x
+
TestMountStart/serial/Stop (2.22s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-422735
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-422735: (2.222600735s)
--- PASS: TestMountStart/serial/Stop (2.22s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (24s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-422735
E1212 00:28:53.764389   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/ingress-addon-legacy-071051/client.crt: no such file or directory
E1212 00:28:56.974356   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-422735: (23.003171575s)
--- PASS: TestMountStart/serial/RestartStopped (24.00s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-422735 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-422735 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (130.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-859606 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E1212 00:29:34.726540   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/ingress-addon-legacy-071051/client.crt: no such file or directory
E1212 00:29:38.389855   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
E1212 00:30:06.073851   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
E1212 00:30:56.647958   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/ingress-addon-legacy-071051/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-859606 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m10.130297469s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (130.60s)
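
The fresh-start test above brings up a two-node cluster in one command with --nodes=2 and then checks both machines with status. A minimal sketch using the flags from this run.

	out/minikube-linux-amd64 start -p multinode-859606 --wait=true --memory=2200 --nodes=2 --driver=kvm2
	out/minikube-linux-amd64 -p multinode-859606 status --alsologtostderr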

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-859606 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-859606 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-859606 -- rollout status deployment/busybox: (3.026306558s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-859606 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-859606 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-859606 -- exec busybox-5bc68d56bd-8rtcm -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-859606 -- exec busybox-5bc68d56bd-lr9gw -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-859606 -- exec busybox-5bc68d56bd-8rtcm -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-859606 -- exec busybox-5bc68d56bd-lr9gw -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-859606 -- exec busybox-5bc68d56bd-8rtcm -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-859606 -- exec busybox-5bc68d56bd-lr9gw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.96s)
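
The deployment test drives the cluster entirely through the `minikube kubectl -p <profile> --` pass-through, where everything after `--` is handed to a kubectl matching the cluster version. A sketch of the core steps from the run above.

	out/minikube-linux-amd64 kubectl -p multinode-859606 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
	out/minikube-linux-amd64 kubectl -p multinode-859606 -- rollout status deployment/busybox
	out/minikube-linux-amd64 kubectl -p multinode-859606 -- get pods -o jsonpath='{.items[*].status.podIP}'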

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-859606 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-859606 -- exec busybox-5bc68d56bd-8rtcm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-859606 -- exec busybox-5bc68d56bd-8rtcm -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-859606 -- exec busybox-5bc68d56bd-lr9gw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-859606 -- exec busybox-5bc68d56bd-lr9gw -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.95s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (45.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-859606 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-859606 -v 3 --alsologtostderr: (45.321158535s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.93s)
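
For reference, the add-node flow above in two commands: grow the running profile with `node add`, then re-check the node list with status.

	out/minikube-linux-amd64 node add -p multinode-859606
	out/minikube-linux-amd64 -p multinode-859606 status --alsologtostderr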

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-859606 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 cp testdata/cp-test.txt multinode-859606:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 ssh -n multinode-859606 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 cp multinode-859606:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1229349573/001/cp-test_multinode-859606.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 ssh -n multinode-859606 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 cp multinode-859606:/home/docker/cp-test.txt multinode-859606-m02:/home/docker/cp-test_multinode-859606_multinode-859606-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 ssh -n multinode-859606 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 ssh -n multinode-859606-m02 "sudo cat /home/docker/cp-test_multinode-859606_multinode-859606-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 cp multinode-859606:/home/docker/cp-test.txt multinode-859606-m03:/home/docker/cp-test_multinode-859606_multinode-859606-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 ssh -n multinode-859606 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 ssh -n multinode-859606-m03 "sudo cat /home/docker/cp-test_multinode-859606_multinode-859606-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 cp testdata/cp-test.txt multinode-859606-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 ssh -n multinode-859606-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 cp multinode-859606-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1229349573/001/cp-test_multinode-859606-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 ssh -n multinode-859606-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 cp multinode-859606-m02:/home/docker/cp-test.txt multinode-859606:/home/docker/cp-test_multinode-859606-m02_multinode-859606.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 ssh -n multinode-859606-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 ssh -n multinode-859606 "sudo cat /home/docker/cp-test_multinode-859606-m02_multinode-859606.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 cp multinode-859606-m02:/home/docker/cp-test.txt multinode-859606-m03:/home/docker/cp-test_multinode-859606-m02_multinode-859606-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 ssh -n multinode-859606-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 ssh -n multinode-859606-m03 "sudo cat /home/docker/cp-test_multinode-859606-m02_multinode-859606-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 cp testdata/cp-test.txt multinode-859606-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 ssh -n multinode-859606-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 cp multinode-859606-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1229349573/001/cp-test_multinode-859606-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 ssh -n multinode-859606-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 cp multinode-859606-m03:/home/docker/cp-test.txt multinode-859606:/home/docker/cp-test_multinode-859606-m03_multinode-859606.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 ssh -n multinode-859606-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 ssh -n multinode-859606 "sudo cat /home/docker/cp-test_multinode-859606-m03_multinode-859606.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 cp multinode-859606-m03:/home/docker/cp-test.txt multinode-859606-m02:/home/docker/cp-test_multinode-859606-m03_multinode-859606-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 ssh -n multinode-859606-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 ssh -n multinode-859606-m02 "sudo cat /home/docker/cp-test_multinode-859606-m03_multinode-859606-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.91s)

TestMultiNode/serial/StopNode (3.99s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-859606 node stop m03: (3.09486002s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-859606 status: exit status 7 (441.309795ms)

-- stdout --
	multinode-859606
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-859606-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-859606-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-859606 status --alsologtostderr: exit status 7 (453.179848ms)

-- stdout --
	multinode-859606
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-859606-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-859606-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1212 00:32:29.273678  103061 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:32:29.273971  103061 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:32:29.273986  103061 out.go:309] Setting ErrFile to fd 2...
	I1212 00:32:29.273994  103061 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:32:29.274252  103061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-80294/.minikube/bin
	I1212 00:32:29.274434  103061 out.go:303] Setting JSON to false
	I1212 00:32:29.274463  103061 mustload.go:65] Loading cluster: multinode-859606
	I1212 00:32:29.274566  103061 notify.go:220] Checking for updates...
	I1212 00:32:29.274910  103061 config.go:182] Loaded profile config "multinode-859606": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 00:32:29.274933  103061 status.go:255] checking status of multinode-859606 ...
	I1212 00:32:29.275458  103061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1212 00:32:29.275505  103061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:32:29.297817  103061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40163
	I1212 00:32:29.298274  103061 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:32:29.298794  103061 main.go:141] libmachine: Using API Version  1
	I1212 00:32:29.298810  103061 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:32:29.299206  103061 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:32:29.299390  103061 main.go:141] libmachine: (multinode-859606) Calling .GetState
	I1212 00:32:29.301054  103061 status.go:330] multinode-859606 host status = "Running" (err=<nil>)
	I1212 00:32:29.301072  103061 host.go:66] Checking if "multinode-859606" exists ...
	I1212 00:32:29.301411  103061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1212 00:32:29.301465  103061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:32:29.315781  103061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42443
	I1212 00:32:29.316196  103061 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:32:29.316633  103061 main.go:141] libmachine: Using API Version  1
	I1212 00:32:29.316655  103061 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:32:29.317012  103061 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:32:29.317187  103061 main.go:141] libmachine: (multinode-859606) Calling .GetIP
	I1212 00:32:29.319463  103061 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:32:29.319873  103061 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:29:30 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:32:29.319902  103061 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:32:29.320037  103061 host.go:66] Checking if "multinode-859606" exists ...
	I1212 00:32:29.320314  103061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1212 00:32:29.320353  103061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:32:29.335419  103061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39873
	I1212 00:32:29.335810  103061 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:32:29.336229  103061 main.go:141] libmachine: Using API Version  1
	I1212 00:32:29.336252  103061 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:32:29.336556  103061 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:32:29.336733  103061 main.go:141] libmachine: (multinode-859606) Calling .DriverName
	I1212 00:32:29.336917  103061 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:32:29.336950  103061 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
	I1212 00:32:29.339331  103061 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:32:29.339717  103061 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:29:30 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
	I1212 00:32:29.339742  103061 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
	I1212 00:32:29.339909  103061 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
	I1212 00:32:29.340072  103061 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
	I1212 00:32:29.340214  103061 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
	I1212 00:32:29.340343  103061 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606/id_rsa Username:docker}
	I1212 00:32:29.427138  103061 ssh_runner.go:195] Run: systemctl --version
	I1212 00:32:29.432803  103061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:32:29.446031  103061 kubeconfig.go:92] found "multinode-859606" server: "https://192.168.39.40:8443"
	I1212 00:32:29.446060  103061 api_server.go:166] Checking apiserver status ...
	I1212 00:32:29.446102  103061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:32:29.457857  103061 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1896/cgroup
	I1212 00:32:29.468180  103061 api_server.go:182] apiserver freezer: "6:freezer:/kubepods/burstable/pod6579d881f0553848179768317ac84853/7db8deb95763f3ec3a11101bbd293f4ef78d3e1f502f5423bf8dfc35b94796a3"
	I1212 00:32:29.468244  103061 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod6579d881f0553848179768317ac84853/7db8deb95763f3ec3a11101bbd293f4ef78d3e1f502f5423bf8dfc35b94796a3/freezer.state
	I1212 00:32:29.479115  103061 api_server.go:204] freezer state: "THAWED"
	I1212 00:32:29.479145  103061 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
	I1212 00:32:29.485722  103061 api_server.go:279] https://192.168.39.40:8443/healthz returned 200:
	ok
	I1212 00:32:29.485746  103061 status.go:421] multinode-859606 apiserver status = Running (err=<nil>)
	I1212 00:32:29.485762  103061 status.go:257] multinode-859606 status: &{Name:multinode-859606 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:32:29.485782  103061 status.go:255] checking status of multinode-859606-m02 ...
	I1212 00:32:29.486100  103061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1212 00:32:29.486150  103061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:32:29.500465  103061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44621
	I1212 00:32:29.500843  103061 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:32:29.501297  103061 main.go:141] libmachine: Using API Version  1
	I1212 00:32:29.501327  103061 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:32:29.501687  103061 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:32:29.501864  103061 main.go:141] libmachine: (multinode-859606-m02) Calling .GetState
	I1212 00:32:29.503436  103061 status.go:330] multinode-859606-m02 host status = "Running" (err=<nil>)
	I1212 00:32:29.503452  103061 host.go:66] Checking if "multinode-859606-m02" exists ...
	I1212 00:32:29.503728  103061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1212 00:32:29.503761  103061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:32:29.517704  103061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42721
	I1212 00:32:29.518126  103061 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:32:29.518567  103061 main.go:141] libmachine: Using API Version  1
	I1212 00:32:29.518589  103061 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:32:29.518928  103061 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:32:29.519131  103061 main.go:141] libmachine: (multinode-859606-m02) Calling .GetIP
	I1212 00:32:29.521489  103061 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:32:29.521895  103061 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:30:48 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:32:29.521921  103061 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:32:29.522053  103061 host.go:66] Checking if "multinode-859606-m02" exists ...
	I1212 00:32:29.522381  103061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1212 00:32:29.522421  103061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:32:29.536748  103061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34937
	I1212 00:32:29.537091  103061 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:32:29.537567  103061 main.go:141] libmachine: Using API Version  1
	I1212 00:32:29.537590  103061 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:32:29.537893  103061 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:32:29.538082  103061 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
	I1212 00:32:29.538252  103061 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:32:29.538279  103061 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
	I1212 00:32:29.540968  103061 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:32:29.541327  103061 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:30:48 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
	I1212 00:32:29.541361  103061 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
	I1212 00:32:29.541501  103061 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
	I1212 00:32:29.541704  103061 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
	I1212 00:32:29.541856  103061 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
	I1212 00:32:29.542005  103061 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa Username:docker}
	I1212 00:32:29.635354  103061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:32:29.649195  103061 status.go:257] multinode-859606-m02 status: &{Name:multinode-859606-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:32:29.649233  103061 status.go:255] checking status of multinode-859606-m03 ...
	I1212 00:32:29.649614  103061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1212 00:32:29.649662  103061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:32:29.664311  103061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42307
	I1212 00:32:29.664807  103061 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:32:29.665302  103061 main.go:141] libmachine: Using API Version  1
	I1212 00:32:29.665324  103061 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:32:29.665657  103061 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:32:29.665831  103061 main.go:141] libmachine: (multinode-859606-m03) Calling .GetState
	I1212 00:32:29.667403  103061 status.go:330] multinode-859606-m03 host status = "Stopped" (err=<nil>)
	I1212 00:32:29.667416  103061 status.go:343] host is not running, skipping remaining checks
	I1212 00:32:29.667421  103061 status.go:257] multinode-859606-m03 status: &{Name:multinode-859606-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.99s)

TestMultiNode/serial/StartAfterStop (31.36s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-859606 node start m03 --alsologtostderr: (30.704972917s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.36s)

TestMultiNode/serial/RestartKeepsNodes (171.03s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-859606
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-859606
E1212 00:33:12.802682   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/ingress-addon-legacy-071051/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-859606: (28.511871619s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-859606 --wait=true -v=8 --alsologtostderr
E1212 00:33:40.489150   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/ingress-addon-legacy-071051/client.crt: no such file or directory
E1212 00:33:56.973810   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
E1212 00:34:38.390744   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
E1212 00:35:20.021652   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-859606 --wait=true -v=8 --alsologtostderr: (2m22.390551227s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-859606
--- PASS: TestMultiNode/serial/RestartKeepsNodes (171.03s)

TestMultiNode/serial/DeleteNode (1.77s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-859606 node delete m03: (1.218616063s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.77s)

TestMultiNode/serial/StopMultiNode (25.68s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-859606 stop: (25.483865547s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-859606 status: exit status 7 (100.257015ms)

-- stdout --
	multinode-859606
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-859606-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-859606 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-859606 status --alsologtostderr: exit status 7 (94.213522ms)

-- stdout --
	multinode-859606
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-859606-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1212 00:36:19.472325  104506 out.go:296] Setting OutFile to fd 1 ...
	I1212 00:36:19.472464  104506 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:36:19.472473  104506 out.go:309] Setting ErrFile to fd 2...
	I1212 00:36:19.472478  104506 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 00:36:19.472645  104506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-80294/.minikube/bin
	I1212 00:36:19.472828  104506 out.go:303] Setting JSON to false
	I1212 00:36:19.472859  104506 mustload.go:65] Loading cluster: multinode-859606
	I1212 00:36:19.472895  104506 notify.go:220] Checking for updates...
	I1212 00:36:19.473227  104506 config.go:182] Loaded profile config "multinode-859606": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 00:36:19.473239  104506 status.go:255] checking status of multinode-859606 ...
	I1212 00:36:19.473644  104506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1212 00:36:19.473707  104506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:36:19.487953  104506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39001
	I1212 00:36:19.488349  104506 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:36:19.488918  104506 main.go:141] libmachine: Using API Version  1
	I1212 00:36:19.488948  104506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:36:19.489317  104506 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:36:19.489517  104506 main.go:141] libmachine: (multinode-859606) Calling .GetState
	I1212 00:36:19.491248  104506 status.go:330] multinode-859606 host status = "Stopped" (err=<nil>)
	I1212 00:36:19.491260  104506 status.go:343] host is not running, skipping remaining checks
	I1212 00:36:19.491265  104506 status.go:257] multinode-859606 status: &{Name:multinode-859606 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:36:19.491297  104506 status.go:255] checking status of multinode-859606-m02 ...
	I1212 00:36:19.491592  104506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1212 00:36:19.491626  104506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:36:19.505410  104506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42247
	I1212 00:36:19.505785  104506 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:36:19.506182  104506 main.go:141] libmachine: Using API Version  1
	I1212 00:36:19.506202  104506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:36:19.506565  104506 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:36:19.506748  104506 main.go:141] libmachine: (multinode-859606-m02) Calling .GetState
	I1212 00:36:19.508501  104506 status.go:330] multinode-859606-m02 host status = "Stopped" (err=<nil>)
	I1212 00:36:19.508527  104506 status.go:343] host is not running, skipping remaining checks
	I1212 00:36:19.508532  104506 status.go:257] multinode-859606-m02 status: &{Name:multinode-859606-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.68s)

TestMultiNode/serial/ValidateNameConflict (52.65s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-859606
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-859606-m02 --driver=kvm2 
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-859606-m02 --driver=kvm2 : exit status 14 (80.969075ms)

-- stdout --
	* [multinode-859606-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17764-80294/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-80294/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-859606-m02' is duplicated with machine name 'multinode-859606-m02' in profile 'multinode-859606'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-859606-m03 --driver=kvm2 
E1212 00:38:12.802738   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/ingress-addon-legacy-071051/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-859606-m03 --driver=kvm2 : (51.261760011s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-859606
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-859606: exit status 80 (238.601695ms)

-- stdout --
	* Adding node m03 to cluster multinode-859606
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-859606-m03 already exists in multinode-859606-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-859606-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-859606-m03: (1.014917075s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (52.65s)

TestPreload (181.46s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-974743 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E1212 00:38:56.974533   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
E1212 00:39:38.390371   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-974743 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m38.436937432s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-974743 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-974743 image pull gcr.io/k8s-minikube/busybox: (1.339314429s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-974743
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-974743: (13.109299967s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-974743 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E1212 00:41:01.434103   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-974743 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m7.291677627s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-974743 image list
helpers_test.go:175: Cleaning up "test-preload-974743" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-974743
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-974743: (1.065025794s)
--- PASS: TestPreload (181.46s)

TestScheduledStopUnix (122.94s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-052275 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-052275 --memory=2048 --driver=kvm2 : (51.161522392s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-052275 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-052275 -n scheduled-stop-052275
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-052275 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-052275 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-052275 -n scheduled-stop-052275
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-052275
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-052275 --schedule 15s
E1212 00:43:12.802689   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/ingress-addon-legacy-071051/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-052275
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-052275: exit status 7 (75.59915ms)

-- stdout --
	scheduled-stop-052275
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-052275 -n scheduled-stop-052275
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-052275 -n scheduled-stop-052275: exit status 7 (74.962414ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-052275" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-052275
--- PASS: TestScheduledStopUnix (122.94s)

TestSkaffold (142.1s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe315147632 version
skaffold_test.go:63: skaffold version: v2.9.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-261002 --memory=2600 --driver=kvm2 
E1212 00:43:56.975409   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
E1212 00:44:35.850030   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/ingress-addon-legacy-071051/client.crt: no such file or directory
E1212 00:44:38.390630   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-261002 --memory=2600 --driver=kvm2 : (52.552942112s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe315147632 run --minikube-profile skaffold-261002 --kube-context skaffold-261002 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe315147632 run --minikube-profile skaffold-261002 --kube-context skaffold-261002 --status-check=true --port-forward=false --interactive=false: (1m17.381478219s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5cccbb9b59-mq8c4" [882a2032-a811-48ee-8b08-bc96ddcbff93] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.019921577s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5f69bbf444-6rwq8" [4ab36923-e9a7-4bf6-97e4-7ca902d148d0] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.012678425s
helpers_test.go:175: Cleaning up "skaffold-261002" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-261002
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-261002: (1.196900904s)
--- PASS: TestSkaffold (142.10s)

TestRunningBinaryUpgrade (171.22s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.3321020381.exe start -p running-upgrade-084969 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.3321020381.exe start -p running-upgrade-084969 --memory=2200 --vm-driver=kvm2 : (1m38.605260768s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-084969 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-084969 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m10.85277021s)
helpers_test.go:175: Cleaning up "running-upgrade-084969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-084969
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-084969: (1.263704914s)
--- PASS: TestRunningBinaryUpgrade (171.22s)

TestKubernetesUpgrade (272.12s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-384376 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-384376 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (2m13.7421054s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-384376
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-384376: (4.363392961s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-384376 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-384376 status --format={{.Host}}: exit status 7 (93.217834ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-384376 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-384376 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2 : (54.184860879s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-384376 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-384376 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-384376 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (97.56681ms)

-- stdout --
	* [kubernetes-upgrade-384376] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17764-80294/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-80294/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-384376
	    minikube start -p kubernetes-upgrade-384376 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3843762 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-384376 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-384376 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-384376 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2 : (1m18.627818124s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-384376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-384376
--- PASS: TestKubernetesUpgrade (272.12s)

TestStoppedBinaryUpgrade/Setup (0.58s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.58s)

TestStoppedBinaryUpgrade/Upgrade (212.18s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.297307286.exe start -p stopped-upgrade-264230 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.297307286.exe start -p stopped-upgrade-264230 --memory=2200 --vm-driver=kvm2 : (1m43.116156061s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.297307286.exe -p stopped-upgrade-264230 stop
E1212 00:48:56.974284   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.297307286.exe -p stopped-upgrade-264230 stop: (13.13421753s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-264230 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-264230 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m35.922468138s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (212.18s)

TestPause/serial/Start (105.07s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-592209 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-592209 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m45.073592504s)
--- PASS: TestPause/serial/Start (105.07s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.35s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-264230
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-264230: (1.35167723s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.35s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-173403 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-173403 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (80.61053ms)

-- stdout --
	* [NoKubernetes-173403] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17764-80294/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-80294/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (85.7s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-173403 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-173403 --driver=kvm2 : (1m25.273864187s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-173403 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (85.70s)

TestPause/serial/SecondStartNoReconfiguration (80.42s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-592209 --alsologtostderr -v=1 --driver=kvm2 
E1212 00:52:00.022657   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-592209 --alsologtostderr -v=1 --driver=kvm2 : (1m20.391978337s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (80.42s)

TestNoKubernetes/serial/StartWithStopK8s (35.17s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-173403 --no-kubernetes --driver=kvm2 
E1212 00:52:19.777791   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/skaffold-261002/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-173403 --no-kubernetes --driver=kvm2 : (33.868283697s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-173403 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-173403 status -o json: exit status 2 (260.685116ms)

-- stdout --
	{"Name":"NoKubernetes-173403","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-173403
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-173403: (1.042486195s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (35.17s)

TestNoKubernetes/serial/Start (30.94s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-173403 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-173403 --no-kubernetes --driver=kvm2 : (30.942748518s)
--- PASS: TestNoKubernetes/serial/Start (30.94s)

TestPause/serial/Pause (0.64s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-592209 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.64s)

TestPause/serial/VerifyStatus (0.28s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-592209 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-592209 --output=json --layout=cluster: exit status 2 (278.558578ms)

-- stdout --
	{"Name":"pause-592209","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-592209","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)

TestPause/serial/Unpause (0.63s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-592209 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.63s)

TestPause/serial/PauseAgain (0.81s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-592209 --alsologtostderr -v=5
E1212 00:53:12.803006   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/ingress-addon-legacy-071051/client.crt: no such file or directory
--- PASS: TestPause/serial/PauseAgain (0.81s)

TestPause/serial/DeletePaused (1.12s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-592209 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-592209 --alsologtostderr -v=5: (1.119491134s)
--- PASS: TestPause/serial/DeletePaused (1.12s)

TestPause/serial/VerifyDeletedResources (0.38s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.38s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-173403 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-173403 "sudo systemctl is-active --quiet service kubelet": exit status 1 (242.772348ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

TestNetworkPlugins/group/auto/Start (95.29s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-826505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-826505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m35.289079183s)
--- PASS: TestNetworkPlugins/group/auto/Start (95.29s)

TestNoKubernetes/serial/ProfileList (0.81s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.81s)

TestNoKubernetes/serial/Stop (2.25s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-173403
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-173403: (2.245446093s)
--- PASS: TestNoKubernetes/serial/Stop (2.25s)

TestNoKubernetes/serial/StartNoArgs (74.57s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-173403 --driver=kvm2 
E1212 00:53:41.697981   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/skaffold-261002/client.crt: no such file or directory
E1212 00:53:56.974441   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-173403 --driver=kvm2 : (1m14.572600272s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (74.57s)

TestNetworkPlugins/group/kindnet/Start (96.48s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-826505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
E1212 00:54:29.732056   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/gvisor-583949/client.crt: no such file or directory
E1212 00:54:29.737341   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/gvisor-583949/client.crt: no such file or directory
E1212 00:54:29.747724   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/gvisor-583949/client.crt: no such file or directory
E1212 00:54:29.767973   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/gvisor-583949/client.crt: no such file or directory
E1212 00:54:29.808259   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/gvisor-583949/client.crt: no such file or directory
E1212 00:54:29.888626   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/gvisor-583949/client.crt: no such file or directory
E1212 00:54:30.049665   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/gvisor-583949/client.crt: no such file or directory
E1212 00:54:30.370804   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/gvisor-583949/client.crt: no such file or directory
E1212 00:54:31.011999   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/gvisor-583949/client.crt: no such file or directory
E1212 00:54:32.293105   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/gvisor-583949/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-826505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m36.479192296s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (96.48s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-173403 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-173403 "sudo systemctl is-active --quiet service kubelet": exit status 1 (213.951339ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

TestNetworkPlugins/group/calico/Start (130.65s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-826505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
E1212 00:54:34.853701   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/gvisor-583949/client.crt: no such file or directory
E1212 00:54:38.390218   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
E1212 00:54:39.974318   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/gvisor-583949/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-826505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m10.653678899s)
--- PASS: TestNetworkPlugins/group/calico/Start (130.65s)

TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-826505 "pgrep -a kubelet"
E1212 00:54:50.214907   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/gvisor-583949/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

TestNetworkPlugins/group/auto/NetCatPod (12.34s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-826505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jv7sx" [6b34eb58-4935-45a2-be9f-1483c9f5a110] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jv7sx" [6b34eb58-4935-45a2-be9f-1483c9f5a110] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.015408259s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.34s)

TestNetworkPlugins/group/auto/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-826505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-826505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/auto/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-826505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)

TestNetworkPlugins/group/custom-flannel/Start (88.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-826505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-826505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m28.097371376s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (88.10s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-d2gq9" [23b1c9c7-e080-48b0-9abc-a5f2280dd292] Running
E1212 00:55:57.855421   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/skaffold-261002/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.031807975s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-826505 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

TestNetworkPlugins/group/kindnet/NetCatPod (15.48s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-826505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context kindnet-826505 replace --force -f testdata/netcat-deployment.yaml: (1.699491307s)
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-njg4j" [97c47ab8-272d-4d15-a39b-cbd9c2a3da02] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-njg4j" [97c47ab8-272d-4d15-a39b-cbd9c2a3da02] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.012576007s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (15.48s)

TestNetworkPlugins/group/kindnet/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-826505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

TestNetworkPlugins/group/kindnet/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-826505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

TestNetworkPlugins/group/kindnet/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-826505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.26s)

TestNetworkPlugins/group/false/Start (82.63s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-826505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E1212 00:56:25.538491   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/skaffold-261002/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-826505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m22.628598931s)
--- PASS: TestNetworkPlugins/group/false/Start (82.63s)

TestNetworkPlugins/group/enable-default-cni/Start (132.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-826505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-826505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (2m12.088677417s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (132.09s)

TestNetworkPlugins/group/calico/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-sj4n6" [7f294ef8-5571-4bc6-8a1c-334ba9511041] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.023278699s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-826505 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-826505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vqcn7" [0c66b572-7dea-4170-b4fe-f5e60ce3992a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vqcn7" [0c66b572-7dea-4170-b4fe-f5e60ce3992a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.014038247s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.32s)

TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-826505 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

TestNetworkPlugins/group/calico/NetCatPod (12.43s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-826505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9fqnw" [5b04b187-d3da-430d-bb9a-280ace310d54] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9fqnw" [5b04b187-d3da-430d-bb9a-280ace310d54] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.022268473s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.43s)

TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-826505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-826505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-826505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/calico/DNS (0.5s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-826505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.50s)

TestNetworkPlugins/group/calico/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-826505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.26s)

TestNetworkPlugins/group/calico/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-826505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.29s)

TestNetworkPlugins/group/flannel/Start (90.61s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-826505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-826505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m30.606187645s)
--- PASS: TestNetworkPlugins/group/flannel/Start (90.61s)

TestNetworkPlugins/group/false/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-826505 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.25s)

TestNetworkPlugins/group/false/NetCatPod (12.34s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-826505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-z9nmx" [62e4c633-ac2c-4486-aef8-1591ad79c7d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-z9nmx" [62e4c633-ac2c-4486-aef8-1591ad79c7d3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.015533457s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.34s)

TestNetworkPlugins/group/false/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-826505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.25s)

TestNetworkPlugins/group/false/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-826505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.22s)

TestNetworkPlugins/group/false/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-826505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.22s)

TestNetworkPlugins/group/kubenet/Start (110.22s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-826505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-826505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m50.224592609s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (110.22s)

TestStartStop/group/old-k8s-version/serial/FirstStart (146.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-190513 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-190513 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (2m26.803843854s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (146.80s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-826505 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-826505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-m6rnt" [fb8e5723-9e1d-4acc-a22d-68435a71766e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-m6rnt" [fb8e5723-9e1d-4acc-a22d-68435a71766e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.012140065s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.35s)

TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-q8cnl" [9e3270ce-e752-48f8-8042-87ca940a79cf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.023641885s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.49s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-826505 "pgrep -a kubelet"
E1212 00:58:56.974447   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.49s)

TestNetworkPlugins/group/flannel/NetCatPod (12.73s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-826505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gh6n6" [cd310598-7451-4847-b22b-332acfbb7daa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gh6n6" [cd310598-7451-4847-b22b-332acfbb7daa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.016505789s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.73s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-826505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-826505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-826505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.25s)

TestNetworkPlugins/group/flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-826505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-826505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-826505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestStartStop/group/no-preload/serial/FirstStart (101.16s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-880873 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.29.0-rc.2
E1212 00:59:29.735941   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/gvisor-583949/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-880873 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.29.0-rc.2: (1m41.160629707s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (101.16s)

TestStartStop/group/embed-certs/serial/FirstStart (101.28s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-362841 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.4
E1212 00:59:38.390324   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
E1212 00:59:50.651717   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/auto-826505/client.crt: no such file or directory
E1212 00:59:50.656973   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/auto-826505/client.crt: no such file or directory
E1212 00:59:50.667209   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/auto-826505/client.crt: no such file or directory
E1212 00:59:50.687472   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/auto-826505/client.crt: no such file or directory
E1212 00:59:50.728295   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/auto-826505/client.crt: no such file or directory
E1212 00:59:50.808679   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/auto-826505/client.crt: no such file or directory
E1212 00:59:50.969105   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/auto-826505/client.crt: no such file or directory
E1212 00:59:51.289652   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/auto-826505/client.crt: no such file or directory
E1212 00:59:51.929893   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/auto-826505/client.crt: no such file or directory
E1212 00:59:53.210295   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/auto-826505/client.crt: no such file or directory
E1212 00:59:55.770670   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/auto-826505/client.crt: no such file or directory
E1212 00:59:57.417976   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/gvisor-583949/client.crt: no such file or directory
E1212 01:00:00.891898   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/auto-826505/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-362841 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.4: (1m41.27978467s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (101.28s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-826505 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kubenet/NetCatPod (13.38s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-826505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-g6qqb" [3092aff8-7060-48bc-92a4-f32ec71d0a2b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1212 01:00:11.132767   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/auto-826505/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-g6qqb" [3092aff8-7060-48bc-92a4-f32ec71d0a2b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.013768443s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.38s)

TestNetworkPlugins/group/kubenet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-826505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.21s)

TestNetworkPlugins/group/kubenet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-826505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.16s)

TestNetworkPlugins/group/kubenet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-826505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.17s)
E1212 01:07:46.059246   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/false-826505/client.crt: no such file or directory
E1212 01:07:52.276831   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kubenet-826505/client.crt: no such file or directory
E1212 01:08:12.802976   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/ingress-addon-legacy-071051/client.crt: no such file or directory
E1212 01:08:13.745918   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/false-826505/client.crt: no such file or directory

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (76.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-375907 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.4
E1212 01:00:53.371421   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kindnet-826505/client.crt: no such file or directory
E1212 01:00:53.376798   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kindnet-826505/client.crt: no such file or directory
E1212 01:00:53.387069   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kindnet-826505/client.crt: no such file or directory
E1212 01:00:53.407384   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kindnet-826505/client.crt: no such file or directory
E1212 01:00:53.447727   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kindnet-826505/client.crt: no such file or directory
E1212 01:00:53.528476   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kindnet-826505/client.crt: no such file or directory
E1212 01:00:53.688932   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kindnet-826505/client.crt: no such file or directory
E1212 01:00:54.009257   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kindnet-826505/client.crt: no such file or directory
E1212 01:00:54.650383   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kindnet-826505/client.crt: no such file or directory
E1212 01:00:55.931116   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kindnet-826505/client.crt: no such file or directory
E1212 01:00:57.855232   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/skaffold-261002/client.crt: no such file or directory
E1212 01:00:58.492109   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kindnet-826505/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-375907 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.4: (1m16.917081133s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (76.92s)

TestStartStop/group/no-preload/serial/DeployApp (9.98s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-880873 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a7c76142-6933-4f1a-8ff3-a41946546ad0] Pending
helpers_test.go:344: "busybox" [a7c76142-6933-4f1a-8ff3-a41946546ad0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1212 01:01:03.612659   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kindnet-826505/client.crt: no such file or directory
helpers_test.go:344: "busybox" [a7c76142-6933-4f1a-8ff3-a41946546ad0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.029607799s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-880873 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.98s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-190513 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f8d50819-d362-4e08-a7fd-1889751f25a4] Pending
helpers_test.go:344: "busybox" [f8d50819-d362-4e08-a7fd-1889751f25a4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f8d50819-d362-4e08-a7fd-1889751f25a4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.03697878s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-190513 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.51s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-880873 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-880873 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.058999597s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-880873 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/no-preload/serial/Stop (13.16s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-880873 --alsologtostderr -v=3
E1212 01:01:12.574483   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/auto-826505/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-880873 --alsologtostderr -v=3: (13.155850935s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.16s)

TestStartStop/group/embed-certs/serial/DeployApp (10.51s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-362841 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b9a9fb32-e63d-4da2-a1c4-3bbb4cf10b50] Pending
helpers_test.go:344: "busybox" [b9a9fb32-e63d-4da2-a1c4-3bbb4cf10b50] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1212 01:01:13.853827   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kindnet-826505/client.crt: no such file or directory
helpers_test.go:344: "busybox" [b9a9fb32-e63d-4da2-a1c4-3bbb4cf10b50] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.062721685s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-362841 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.51s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-190513 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1212 01:01:15.851021   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/ingress-addon-legacy-071051/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-190513 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.532206255s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-190513 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.65s)

TestStartStop/group/old-k8s-version/serial/Stop (13.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-190513 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-190513 --alsologtostderr -v=3: (13.160699948s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.16s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-362841 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-362841 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.15835867s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-362841 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.23s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-880873 -n no-preload-880873
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-880873 -n no-preload-880873: exit status 7 (84.764754ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-880873 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)
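For reference, the check-then-enable sequence above can be replayed by hand with the same commands the test logged for profile no-preload-880873; the comments are interpretation added here, not test output.

	# Probe the VM state. On the stopped profile this run printed "Stopped" and exited with status 7, which the test treats as acceptable.
	out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-880873 -n no-preload-880873
	# Addons can still be toggled while the profile is down; the dashboard addon is recorded in the profile and picked up on the next start.
	out/minikube-linux-amd64 addons enable dashboard -p no-preload-880873 --images=MetricsScraper=registry.k8s.io/echoserver:1.4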

TestStartStop/group/no-preload/serial/SecondStart (338.33s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-880873 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-880873 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.29.0-rc.2: (5m37.995734794s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-880873 -n no-preload-880873
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (338.33s)

TestStartStop/group/embed-certs/serial/Stop (13.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-362841 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-362841 --alsologtostderr -v=3: (13.136084428s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.14s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-190513 -n old-k8s-version-190513
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-190513 -n old-k8s-version-190513: exit status 7 (95.73346ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-190513 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/old-k8s-version/serial/SecondStart (478.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-190513 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E1212 01:01:34.334550   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kindnet-826505/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-190513 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (7m57.936539216s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-190513 -n old-k8s-version-190513
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (478.23s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-362841 -n embed-certs-362841
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-362841 -n embed-certs-362841: exit status 7 (96.102807ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-362841 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (347.3s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-362841 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.4
E1212 01:01:44.353794   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/calico-826505/client.crt: no such file or directory
E1212 01:01:44.359155   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/calico-826505/client.crt: no such file or directory
E1212 01:01:44.369425   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/calico-826505/client.crt: no such file or directory
E1212 01:01:44.389758   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/calico-826505/client.crt: no such file or directory
E1212 01:01:44.430060   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/calico-826505/client.crt: no such file or directory
E1212 01:01:44.510381   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/calico-826505/client.crt: no such file or directory
E1212 01:01:44.670880   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/calico-826505/client.crt: no such file or directory
E1212 01:01:44.991267   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/calico-826505/client.crt: no such file or directory
E1212 01:01:45.632181   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/calico-826505/client.crt: no such file or directory
E1212 01:01:46.912420   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/calico-826505/client.crt: no such file or directory
E1212 01:01:49.092397   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/custom-flannel-826505/client.crt: no such file or directory
E1212 01:01:49.097735   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/custom-flannel-826505/client.crt: no such file or directory
E1212 01:01:49.108002   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/custom-flannel-826505/client.crt: no such file or directory
E1212 01:01:49.128366   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/custom-flannel-826505/client.crt: no such file or directory
E1212 01:01:49.168706   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/custom-flannel-826505/client.crt: no such file or directory
E1212 01:01:49.249307   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/custom-flannel-826505/client.crt: no such file or directory
E1212 01:01:49.410425   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/custom-flannel-826505/client.crt: no such file or directory
E1212 01:01:49.472704   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/calico-826505/client.crt: no such file or directory
E1212 01:01:49.730963   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/custom-flannel-826505/client.crt: no such file or directory
E1212 01:01:50.371904   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/custom-flannel-826505/client.crt: no such file or directory
E1212 01:01:51.653081   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/custom-flannel-826505/client.crt: no such file or directory
E1212 01:01:54.213357   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/custom-flannel-826505/client.crt: no such file or directory
E1212 01:01:54.593136   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/calico-826505/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-362841 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.4: (5m47.001068549s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-362841 -n embed-certs-362841
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (347.30s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-375907 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a8d57be3-21e3-4958-a62e-fcb8d9219e3a] Pending
helpers_test.go:344: "busybox" [a8d57be3-21e3-4958-a62e-fcb8d9219e3a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1212 01:01:59.334530   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/custom-flannel-826505/client.crt: no such file or directory
helpers_test.go:344: "busybox" [a8d57be3-21e3-4958-a62e-fcb8d9219e3a] Running
E1212 01:02:04.833897   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/calico-826505/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.028895802s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-375907 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.49s)
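The deploy step above amounts to three kubectl calls against the profile's context. The create and exec commands are taken from the log; the kubectl wait line is a stand-in for the test's own polling helper (helpers_test.go) and is an assumption, not what the test actually ran.

	# Create the busybox test pod from the repository's testdata manifest.
	kubectl --context default-k8s-diff-port-375907 create -f testdata/busybox.yaml
	# Wait for the pod labelled integration-test=busybox to become Ready (the test allows up to 8m).
	kubectl --context default-k8s-diff-port-375907 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	# Check the container's open-file limit from inside the pod.
	kubectl --context default-k8s-diff-port-375907 exec busybox -- /bin/sh -c "ulimit -n"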

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-375907 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1212 01:02:09.575650   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/custom-flannel-826505/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-375907 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.022212502s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-375907 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-375907 --alsologtostderr -v=3
E1212 01:02:15.295079   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kindnet-826505/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-375907 --alsologtostderr -v=3: (13.130097611s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.13s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-375907 -n default-k8s-diff-port-375907
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-375907 -n default-k8s-diff-port-375907: exit status 7 (92.109467ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-375907 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (355.77s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-375907 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.4
E1212 01:02:25.314662   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/calico-826505/client.crt: no such file or directory
E1212 01:02:30.055959   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/custom-flannel-826505/client.crt: no such file or directory
E1212 01:02:34.494636   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/auto-826505/client.crt: no such file or directory
E1212 01:02:46.059637   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/false-826505/client.crt: no such file or directory
E1212 01:02:46.064975   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/false-826505/client.crt: no such file or directory
E1212 01:02:46.075336   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/false-826505/client.crt: no such file or directory
E1212 01:02:46.095651   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/false-826505/client.crt: no such file or directory
E1212 01:02:46.136002   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/false-826505/client.crt: no such file or directory
E1212 01:02:46.216356   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/false-826505/client.crt: no such file or directory
E1212 01:02:46.377401   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/false-826505/client.crt: no such file or directory
E1212 01:02:46.698097   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/false-826505/client.crt: no such file or directory
E1212 01:02:47.339043   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/false-826505/client.crt: no such file or directory
E1212 01:02:48.620254   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/false-826505/client.crt: no such file or directory
E1212 01:02:51.180763   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/false-826505/client.crt: no such file or directory
E1212 01:02:56.301446   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/false-826505/client.crt: no such file or directory
E1212 01:03:06.275666   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/calico-826505/client.crt: no such file or directory
E1212 01:03:06.542416   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/false-826505/client.crt: no such file or directory
E1212 01:03:11.016987   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/custom-flannel-826505/client.crt: no such file or directory
E1212 01:03:12.802499   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/ingress-addon-legacy-071051/client.crt: no such file or directory
E1212 01:03:27.023061   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/false-826505/client.crt: no such file or directory
E1212 01:03:37.215272   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kindnet-826505/client.crt: no such file or directory
E1212 01:03:47.352702   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/enable-default-cni-826505/client.crt: no such file or directory
E1212 01:03:47.358035   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/enable-default-cni-826505/client.crt: no such file or directory
E1212 01:03:47.368339   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/enable-default-cni-826505/client.crt: no such file or directory
E1212 01:03:47.388662   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/enable-default-cni-826505/client.crt: no such file or directory
E1212 01:03:47.428979   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/enable-default-cni-826505/client.crt: no such file or directory
E1212 01:03:47.509353   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/enable-default-cni-826505/client.crt: no such file or directory
E1212 01:03:47.670326   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/enable-default-cni-826505/client.crt: no such file or directory
E1212 01:03:47.990749   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/enable-default-cni-826505/client.crt: no such file or directory
E1212 01:03:48.630921   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/enable-default-cni-826505/client.crt: no such file or directory
E1212 01:03:49.911413   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/enable-default-cni-826505/client.crt: no such file or directory
E1212 01:03:51.498347   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/flannel-826505/client.crt: no such file or directory
E1212 01:03:51.503643   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/flannel-826505/client.crt: no such file or directory
E1212 01:03:51.514047   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/flannel-826505/client.crt: no such file or directory
E1212 01:03:51.534397   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/flannel-826505/client.crt: no such file or directory
E1212 01:03:51.574749   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/flannel-826505/client.crt: no such file or directory
E1212 01:03:51.655066   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/flannel-826505/client.crt: no such file or directory
E1212 01:03:51.815347   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/flannel-826505/client.crt: no such file or directory
E1212 01:03:52.135700   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/flannel-826505/client.crt: no such file or directory
E1212 01:03:52.472323   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/enable-default-cni-826505/client.crt: no such file or directory
E1212 01:03:52.776494   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/flannel-826505/client.crt: no such file or directory
E1212 01:03:54.056727   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/flannel-826505/client.crt: no such file or directory
E1212 01:03:56.617029   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/flannel-826505/client.crt: no such file or directory
E1212 01:03:56.974587   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
E1212 01:03:57.593306   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/enable-default-cni-826505/client.crt: no such file or directory
E1212 01:04:01.737968   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/flannel-826505/client.crt: no such file or directory
E1212 01:04:07.833643   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/enable-default-cni-826505/client.crt: no such file or directory
E1212 01:04:07.984162   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/false-826505/client.crt: no such file or directory
E1212 01:04:11.978254   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/flannel-826505/client.crt: no such file or directory
E1212 01:04:28.196158   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/calico-826505/client.crt: no such file or directory
E1212 01:04:28.314419   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/enable-default-cni-826505/client.crt: no such file or directory
E1212 01:04:29.732277   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/gvisor-583949/client.crt: no such file or directory
E1212 01:04:32.459254   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/flannel-826505/client.crt: no such file or directory
E1212 01:04:32.937984   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/custom-flannel-826505/client.crt: no such file or directory
E1212 01:04:38.390738   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
E1212 01:04:50.652115   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/auto-826505/client.crt: no such file or directory
E1212 01:05:08.432976   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kubenet-826505/client.crt: no such file or directory
E1212 01:05:08.438276   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kubenet-826505/client.crt: no such file or directory
E1212 01:05:08.448545   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kubenet-826505/client.crt: no such file or directory
E1212 01:05:08.468790   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kubenet-826505/client.crt: no such file or directory
E1212 01:05:08.509078   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kubenet-826505/client.crt: no such file or directory
E1212 01:05:08.589406   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kubenet-826505/client.crt: no such file or directory
E1212 01:05:08.750062   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kubenet-826505/client.crt: no such file or directory
E1212 01:05:09.070673   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kubenet-826505/client.crt: no such file or directory
E1212 01:05:09.275111   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/enable-default-cni-826505/client.crt: no such file or directory
E1212 01:05:09.711205   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kubenet-826505/client.crt: no such file or directory
E1212 01:05:10.991450   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kubenet-826505/client.crt: no such file or directory
E1212 01:05:13.420048   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/flannel-826505/client.crt: no such file or directory
E1212 01:05:13.552376   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kubenet-826505/client.crt: no such file or directory
E1212 01:05:18.335097   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/auto-826505/client.crt: no such file or directory
E1212 01:05:18.673443   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kubenet-826505/client.crt: no such file or directory
E1212 01:05:28.914586   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kubenet-826505/client.crt: no such file or directory
E1212 01:05:29.905308   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/false-826505/client.crt: no such file or directory
E1212 01:05:49.395696   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kubenet-826505/client.crt: no such file or directory
E1212 01:05:53.371251   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kindnet-826505/client.crt: no such file or directory
E1212 01:05:57.855186   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/skaffold-261002/client.crt: no such file or directory
E1212 01:06:21.056320   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kindnet-826505/client.crt: no such file or directory
E1212 01:06:30.356388   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/kubenet-826505/client.crt: no such file or directory
E1212 01:06:31.195502   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/enable-default-cni-826505/client.crt: no such file or directory
E1212 01:06:35.341128   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/flannel-826505/client.crt: no such file or directory
E1212 01:06:44.352863   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/calico-826505/client.crt: no such file or directory
E1212 01:06:49.092708   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/custom-flannel-826505/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-375907 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.4: (5m55.431947254s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-375907 -n default-k8s-diff-port-375907
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (355.77s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mc25f" [bb4aaffa-5a7c-4fa1-9c96-c38a4943219d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.021369782s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)
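The check above polls for the dashboard pod (label k8s-app=kubernetes-dashboard, namespace kubernetes-dashboard) that the earlier "addons enable dashboard" step created. A rough manual equivalent, with kubectl wait standing in for the test's polling helper and therefore an assumption, would be:

	kubectl --context no-preload-880873 -n kubernetes-dashboard \
	  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m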

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mc25f" [bb4aaffa-5a7c-4fa1-9c96-c38a4943219d] Running
E1212 01:07:12.036845   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/calico-826505/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013641572s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-880873 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-880873 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (2.78s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-880873 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-880873 -n no-preload-880873
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-880873 -n no-preload-880873: exit status 2 (261.626639ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-880873 -n no-preload-880873
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-880873 -n no-preload-880873: exit status 2 (275.515701ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-880873 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-880873 -n no-preload-880873
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-880873 -n no-preload-880873
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.78s)
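The pause cycle above uses the commands below, copied from the log for profile no-preload-880873; the expected outputs in the comments are what this run observed.

	out/minikube-linux-amd64 pause -p no-preload-880873 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-880873 -n no-preload-880873   # printed "Paused", exit status 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-880873 -n no-preload-880873     # printed "Stopped", exit status 2
	out/minikube-linux-amd64 unpause -p no-preload-880873 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-880873 -n no-preload-880873   # succeeds once the node is resumed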

TestStartStop/group/newest-cni/serial/FirstStart (78.2s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-221640 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.29.0-rc.2
E1212 01:07:20.899406   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/skaffold-261002/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-221640 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.29.0-rc.2: (1m18.197649389s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (78.20s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-lpb6c" [9687b4a1-3464-47f9-9e1a-4391f7ee09b2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.026864813s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-lpb6c" [9687b4a1-3464-47f9-9e1a-4391f7ee09b2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016985426s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-362841 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-362841 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/embed-certs/serial/Pause (2.86s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-362841 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-362841 -n embed-certs-362841
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-362841 -n embed-certs-362841: exit status 2 (273.706317ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-362841 -n embed-certs-362841
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-362841 -n embed-certs-362841: exit status 2 (268.367242ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-362841 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-362841 -n embed-certs-362841
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-362841 -n embed-certs-362841
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.86s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7zn5h" [317c0c20-cef0-4534-8436-b8e9529049d0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.02471316s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7zn5h" [317c0c20-cef0-4534-8436-b8e9529049d0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015264022s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-375907 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-375907 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-375907 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-375907 -n default-k8s-diff-port-375907
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-375907 -n default-k8s-diff-port-375907: exit status 2 (260.683932ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-375907 -n default-k8s-diff-port-375907
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-375907 -n default-k8s-diff-port-375907: exit status 2 (256.226308ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-375907 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-375907 -n default-k8s-diff-port-375907
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-375907 -n default-k8s-diff-port-375907
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.75s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-221640 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/newest-cni/serial/Stop (8.12s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-221640 --alsologtostderr -v=3
E1212 01:08:40.023522   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-221640 --alsologtostderr -v=3: (8.12342984s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.12s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-221640 -n newest-cni-221640
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-221640 -n newest-cni-221640: exit status 7 (81.656128ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-221640 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (48.36s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-221640 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.29.0-rc.2
E1212 01:08:47.352677   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/enable-default-cni-826505/client.crt: no such file or directory
E1212 01:08:51.498977   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/flannel-826505/client.crt: no such file or directory
E1212 01:08:56.973608   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/addons-018377/client.crt: no such file or directory
E1212 01:09:15.036023   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/enable-default-cni-826505/client.crt: no such file or directory
E1212 01:09:19.181653   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/flannel-826505/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-221640 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.29.0-rc.2: (48.020121389s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-221640 -n newest-cni-221640
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (48.36s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-tskjj" [21f9be0b-9a6a-48a6-b336-50a72440da54] Running
E1212 01:09:29.732470   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/gvisor-583949/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.019912503s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-221640 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/newest-cni/serial/Pause (2.48s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-221640 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-221640 -n newest-cni-221640
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-221640 -n newest-cni-221640: exit status 2 (253.987208ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-221640 -n newest-cni-221640
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-221640 -n newest-cni-221640: exit status 2 (270.300242ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-221640 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-221640 -n newest-cni-221640
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-221640 -n newest-cni-221640
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.48s)
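
The pause check above boils down to a short sequence; a rough manual equivalent, reusing this run's profile name (exit status 2 from "status" is expected while the node is paused, which is why the test logs it as "may be ok"):

	out/minikube-linux-amd64 pause -p newest-cni-221640 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-221640 -n newest-cni-221640   # reports "Paused", exits 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-221640 -n newest-cni-221640     # reports "Stopped", exits 2
	out/minikube-linux-amd64 unpause -p newest-cni-221640 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-221640 -n newest-cni-221640   # exits 0 once the node is resumed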

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-tskjj" [21f9be0b-9a6a-48a6-b336-50a72440da54] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01392291s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-190513 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-190513 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (2.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-190513 --alsologtostderr -v=1
E1212 01:09:38.390396   87609 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/functional-289946/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-190513 -n old-k8s-version-190513
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-190513 -n old-k8s-version-190513: exit status 2 (264.741815ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-190513 -n old-k8s-version-190513
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-190513 -n old-k8s-version-190513: exit status 2 (254.594102ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-190513 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-190513 -n old-k8s-version-190513
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-190513 -n old-k8s-version-190513
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.45s)

Test skip (34/323)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.4/cached-images 0
13 TestDownloadOnly/v1.28.4/binaries 0
14 TestDownloadOnly/v1.28.4/kubectl 0
19 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
20 TestDownloadOnly/v1.29.0-rc.2/binaries 0
21 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
25 TestDownloadOnlyKic 0
39 TestAddons/parallel/Olm 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
109 TestFunctional/parallel/PodmanEnv 0
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
166 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
199 TestKicCustomNetwork 0
200 TestKicExistingNetwork 0
201 TestKicCustomSubnet 0
202 TestKicStaticIP 0
234 TestChangeNoneUser 0
237 TestScheduledStopWindows 0
241 TestInsufficientStorage 0
245 TestMissingContainerUpgrade 0
256 TestNetworkPlugins/group/cilium 4.24
267 TestStartStop/group/disable-driver-mounts 0.19

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (4.24s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-826505 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-826505

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-826505

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-826505

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-826505

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-826505

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-826505

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-826505

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-826505

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-826505

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-826505

>>> host: /etc/nsswitch.conf:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: /etc/hosts:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: /etc/resolv.conf:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-826505

>>> host: crictl pods:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: crictl containers:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> k8s: describe netcat deployment:
error: context "cilium-826505" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-826505" does not exist

>>> k8s: netcat logs:
error: context "cilium-826505" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-826505" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-826505" does not exist

>>> k8s: coredns logs:
error: context "cilium-826505" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-826505" does not exist

>>> k8s: api server logs:
error: context "cilium-826505" does not exist

>>> host: /etc/cni:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: ip a s:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: ip r s:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: iptables-save:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: iptables table nat:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-826505

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-826505

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-826505" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-826505" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-826505

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-826505

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-826505" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-826505" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-826505" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-826505" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-826505" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: kubelet daemon config:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> k8s: kubelet logs:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-826505

>>> host: docker daemon status:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: docker daemon config:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: docker system info:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: cri-docker daemon status:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: cri-docker daemon config:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: cri-dockerd version:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: containerd daemon status:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: containerd daemon config:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: containerd config dump:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: crio daemon status:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: crio daemon config:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: /etc/crio:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

>>> host: crio config:
* Profile "cilium-826505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826505"

----------------------- debugLogs end: cilium-826505 [took: 4.035583798s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-826505" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-826505
--- SKIP: TestNetworkPlugins/group/cilium (4.24s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-468491" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-468491
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)