Test Report: KVM_Linux 17486

90bfaeb6484f3951039c439350045b001b754599:2023-11-01:31693

Tests failed (2/321)

Order  Failed test                                                          Duration (s)
222    TestMultiNode/serial/RestartMultiNode                                83.62
365    TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages    3.41
TestMultiNode/serial/RestartMultiNode (83.62s)
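
The failure below was produced by the start invocation quoted from the log that follows (the profile name is generated per run, so substitute your own when reproducing):

	out/minikube-linux-amd64 start -p multinode-391061 --wait=true -v=8 --alsologtostderr --driver=kvm2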

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-391061 --wait=true -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-391061 --wait=true -v=8 --alsologtostderr --driver=kvm2 : exit status 90 (1m21.256349954s)

-- stdout --
	* [multinode-391061] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17486
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17486-7251/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7251/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node multinode-391061 in cluster multinode-391061
	* Restarting existing kvm2 VM for "multinode-391061" ...
	* Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Starting worker node multinode-391061-m02 in cluster multinode-391061
	* Restarting existing kvm2 VM for "multinode-391061-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.39.43
	
	

-- /stdout --
** stderr ** 
	I1101 00:08:49.696747   30593 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:08:49.696976   30593 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:08:49.696984   30593 out.go:309] Setting ErrFile to fd 2...
	I1101 00:08:49.696989   30593 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:08:49.697199   30593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7251/.minikube/bin
	I1101 00:08:49.697724   30593 out.go:303] Setting JSON to false
	I1101 00:08:49.698581   30593 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3079,"bootTime":1698794251,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 00:08:49.698643   30593 start.go:138] virtualization: kvm guest
	I1101 00:08:49.701257   30593 out.go:177] * [multinode-391061] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1101 00:08:49.702839   30593 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 00:08:49.702844   30593 notify.go:220] Checking for updates...
	I1101 00:08:49.704612   30593 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 00:08:49.706320   30593 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-7251/kubeconfig
	I1101 00:08:49.707852   30593 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7251/.minikube
	I1101 00:08:49.709325   30593 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 00:08:49.710727   30593 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 00:08:49.712746   30593 config.go:182] Loaded profile config "multinode-391061": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1101 00:08:49.713116   30593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:08:49.713162   30593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:08:49.727252   30593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37705
	I1101 00:08:49.727584   30593 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:08:49.728056   30593 main.go:141] libmachine: Using API Version  1
	I1101 00:08:49.728075   30593 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:08:49.728412   30593 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:08:49.728601   30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
	I1101 00:08:49.728809   30593 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 00:08:49.729119   30593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:08:49.729158   30593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:08:49.742929   30593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42673
	I1101 00:08:49.743302   30593 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:08:49.743756   30593 main.go:141] libmachine: Using API Version  1
	I1101 00:08:49.743779   30593 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:08:49.744063   30593 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:08:49.744234   30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
	I1101 00:08:49.779391   30593 out.go:177] * Using the kvm2 driver based on existing profile
	I1101 00:08:49.780999   30593 start.go:298] selected driver: kvm2
	I1101 00:08:49.781015   30593 start.go:902] validating driver "kvm2" against &{Name:multinode-391061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-391061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.249 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:08:49.781172   30593 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 00:08:49.781470   30593 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:08:49.781541   30593 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17486-7251/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1101 00:08:49.796518   30593 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1101 00:08:49.797197   30593 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 00:08:49.797254   30593 cni.go:84] Creating CNI manager for ""
	I1101 00:08:49.797263   30593 cni.go:136] 2 nodes found, recommending kindnet
	I1101 00:08:49.797274   30593 start_flags.go:323] config:
	{Name:multinode-391061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-391061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.249 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:08:49.797449   30593 iso.go:125] acquiring lock: {Name:mk56e0e42e3cb427bae1fd4521b75db693021ac1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:08:49.799445   30593 out.go:177] * Starting control plane node multinode-391061 in cluster multinode-391061
	I1101 00:08:49.802107   30593 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1101 00:08:49.802154   30593 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17486-7251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1101 00:08:49.802163   30593 cache.go:56] Caching tarball of preloaded images
	I1101 00:08:49.802239   30593 preload.go:174] Found /home/jenkins/minikube-integration/17486-7251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1101 00:08:49.802251   30593 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1101 00:08:49.802383   30593 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/config.json ...
	I1101 00:08:49.802605   30593 start.go:365] acquiring machines lock for multinode-391061: {Name:mkd250049361a5d831a3d31c273569334737e54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:08:49.802660   30593 start.go:369] acquired machines lock for "multinode-391061" in 32.142µs
	I1101 00:08:49.802683   30593 start.go:96] Skipping create...Using existing machine configuration
	I1101 00:08:49.802692   30593 fix.go:54] fixHost starting: 
	I1101 00:08:49.802950   30593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:08:49.802988   30593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:08:49.817041   30593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39873
	I1101 00:08:49.817426   30593 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:08:49.817852   30593 main.go:141] libmachine: Using API Version  1
	I1101 00:08:49.817876   30593 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:08:49.818147   30593 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:08:49.818268   30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
	I1101 00:08:49.818364   30593 main.go:141] libmachine: (multinode-391061) Calling .GetState
	I1101 00:08:49.819780   30593 fix.go:102] recreateIfNeeded on multinode-391061: state=Stopped err=<nil>
	I1101 00:08:49.819798   30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
	W1101 00:08:49.819945   30593 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 00:08:49.822198   30593 out.go:177] * Restarting existing kvm2 VM for "multinode-391061" ...
	I1101 00:08:49.823675   30593 main.go:141] libmachine: (multinode-391061) Calling .Start
	I1101 00:08:49.823836   30593 main.go:141] libmachine: (multinode-391061) Ensuring networks are active...
	I1101 00:08:49.824527   30593 main.go:141] libmachine: (multinode-391061) Ensuring network default is active
	I1101 00:08:49.824903   30593 main.go:141] libmachine: (multinode-391061) Ensuring network mk-multinode-391061 is active
	I1101 00:08:49.825231   30593 main.go:141] libmachine: (multinode-391061) Getting domain xml...
	I1101 00:08:49.825825   30593 main.go:141] libmachine: (multinode-391061) Creating domain...
	I1101 00:08:51.072133   30593 main.go:141] libmachine: (multinode-391061) Waiting to get IP...
	I1101 00:08:51.072978   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:08:51.073561   30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
	I1101 00:08:51.073673   30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:51.073534   30629 retry.go:31] will retry after 229.675258ms: waiting for machine to come up
	I1101 00:08:51.305068   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:08:51.305486   30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
	I1101 00:08:51.305513   30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:51.305442   30629 retry.go:31] will retry after 372.862383ms: waiting for machine to come up
	I1101 00:08:51.680135   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:08:51.680628   30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
	I1101 00:08:51.680663   30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:51.680610   30629 retry.go:31] will retry after 314.755115ms: waiting for machine to come up
	I1101 00:08:51.997095   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:08:51.997485   30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
	I1101 00:08:51.997516   30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:51.997452   30629 retry.go:31] will retry after 376.70772ms: waiting for machine to come up
	I1101 00:08:52.376191   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:08:52.376728   30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
	I1101 00:08:52.376768   30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:52.376689   30629 retry.go:31] will retry after 583.291159ms: waiting for machine to come up
	I1101 00:08:52.961471   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:08:52.961889   30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
	I1101 00:08:52.961920   30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:52.961826   30629 retry.go:31] will retry after 803.566491ms: waiting for machine to come up
	I1101 00:08:53.766791   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:08:53.767211   30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
	I1101 00:08:53.767251   30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:53.767153   30629 retry.go:31] will retry after 1.032833525s: waiting for machine to come up
	I1101 00:08:54.801328   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:08:54.801700   30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
	I1101 00:08:54.801734   30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:54.801656   30629 retry.go:31] will retry after 1.044435025s: waiting for machine to come up
	I1101 00:08:55.847409   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:08:55.847850   30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
	I1101 00:08:55.847874   30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:55.847797   30629 retry.go:31] will retry after 1.41464542s: waiting for machine to come up
	I1101 00:08:57.264298   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:08:57.264621   30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
	I1101 00:08:57.264658   30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:57.264585   30629 retry.go:31] will retry after 1.783339985s: waiting for machine to come up
	I1101 00:08:59.050737   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:08:59.051258   30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
	I1101 00:08:59.051280   30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:59.051209   30629 retry.go:31] will retry after 2.24727828s: waiting for machine to come up
	I1101 00:09:01.300675   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:01.301123   30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
	I1101 00:09:01.301147   30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:09:01.301080   30629 retry.go:31] will retry after 2.659318668s: waiting for machine to come up
	I1101 00:09:03.964050   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:03.964412   30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
	I1101 00:09:03.964433   30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:09:03.964369   30629 retry.go:31] will retry after 4.002549509s: waiting for machine to come up
	I1101 00:09:07.970570   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:07.970947   30593 main.go:141] libmachine: (multinode-391061) Found IP for machine: 192.168.39.43
	I1101 00:09:07.970973   30593 main.go:141] libmachine: (multinode-391061) Reserving static IP address...
	I1101 00:09:07.970988   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has current primary IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:07.971417   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "multinode-391061", mac: "52:54:00:b9:c2:69", ip: "192.168.39.43"} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:07.971446   30593 main.go:141] libmachine: (multinode-391061) DBG | skip adding static IP to network mk-multinode-391061 - found existing host DHCP lease matching {name: "multinode-391061", mac: "52:54:00:b9:c2:69", ip: "192.168.39.43"}
	I1101 00:09:07.971454   30593 main.go:141] libmachine: (multinode-391061) Reserved static IP address: 192.168.39.43
	I1101 00:09:07.971463   30593 main.go:141] libmachine: (multinode-391061) Waiting for SSH to be available...
	I1101 00:09:07.971472   30593 main.go:141] libmachine: (multinode-391061) DBG | Getting to WaitForSSH function...
	I1101 00:09:07.973244   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:07.973598   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:07.973629   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:07.973785   30593 main.go:141] libmachine: (multinode-391061) DBG | Using SSH client type: external
	I1101 00:09:07.973815   30593 main.go:141] libmachine: (multinode-391061) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061/id_rsa (-rw-------)
	I1101 00:09:07.973859   30593 main.go:141] libmachine: (multinode-391061) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.43 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 00:09:07.973884   30593 main.go:141] libmachine: (multinode-391061) DBG | About to run SSH command:
	I1101 00:09:07.973895   30593 main.go:141] libmachine: (multinode-391061) DBG | exit 0
	I1101 00:09:08.070105   30593 main.go:141] libmachine: (multinode-391061) DBG | SSH cmd err, output: <nil>: 
	I1101 00:09:08.070483   30593 main.go:141] libmachine: (multinode-391061) Calling .GetConfigRaw
	I1101 00:09:08.071216   30593 main.go:141] libmachine: (multinode-391061) Calling .GetIP
	I1101 00:09:08.073614   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.074025   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:08.074060   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.074285   30593 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/config.json ...
	I1101 00:09:08.074479   30593 machine.go:88] provisioning docker machine ...
	I1101 00:09:08.074512   30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
	I1101 00:09:08.074714   30593 main.go:141] libmachine: (multinode-391061) Calling .GetMachineName
	I1101 00:09:08.074856   30593 buildroot.go:166] provisioning hostname "multinode-391061"
	I1101 00:09:08.074870   30593 main.go:141] libmachine: (multinode-391061) Calling .GetMachineName
	I1101 00:09:08.074990   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
	I1101 00:09:08.077098   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.077410   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:08.077452   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.077575   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
	I1101 00:09:08.077739   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:08.077899   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:08.078007   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
	I1101 00:09:08.078153   30593 main.go:141] libmachine: Using SSH client type: native
	I1101 00:09:08.078494   30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1101 00:09:08.078529   30593 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-391061 && echo "multinode-391061" | sudo tee /etc/hostname
	I1101 00:09:08.217944   30593 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-391061
	
	I1101 00:09:08.217967   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
	I1101 00:09:08.220671   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.220963   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:08.221024   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.221089   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
	I1101 00:09:08.221295   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:08.221466   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:08.221616   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
	I1101 00:09:08.221803   30593 main.go:141] libmachine: Using SSH client type: native
	I1101 00:09:08.222253   30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1101 00:09:08.222280   30593 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-391061' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-391061/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-391061' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 00:09:08.359049   30593 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 00:09:08.359078   30593 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7251/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7251/.minikube}
	I1101 00:09:08.359096   30593 buildroot.go:174] setting up certificates
	I1101 00:09:08.359104   30593 provision.go:83] configureAuth start
	I1101 00:09:08.359112   30593 main.go:141] libmachine: (multinode-391061) Calling .GetMachineName
	I1101 00:09:08.359381   30593 main.go:141] libmachine: (multinode-391061) Calling .GetIP
	I1101 00:09:08.361931   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.362234   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:08.362269   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.362374   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
	I1101 00:09:08.364658   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.364936   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:08.364968   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.365105   30593 provision.go:138] copyHostCerts
	I1101 00:09:08.365133   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem
	I1101 00:09:08.365172   30593 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem, removing ...
	I1101 00:09:08.365183   30593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem
	I1101 00:09:08.365248   30593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem (1082 bytes)
	I1101 00:09:08.365344   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem
	I1101 00:09:08.365365   30593 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem, removing ...
	I1101 00:09:08.365372   30593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem
	I1101 00:09:08.365399   30593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem (1123 bytes)
	I1101 00:09:08.365452   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem
	I1101 00:09:08.365467   30593 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem, removing ...
	I1101 00:09:08.365473   30593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem
	I1101 00:09:08.365494   30593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem (1675 bytes)
	I1101 00:09:08.365549   30593 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem org=jenkins.multinode-391061 san=[192.168.39.43 192.168.39.43 localhost 127.0.0.1 minikube multinode-391061]
	I1101 00:09:08.497882   30593 provision.go:172] copyRemoteCerts
	I1101 00:09:08.497940   30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 00:09:08.497965   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
	I1101 00:09:08.500598   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.500931   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:08.500961   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.501176   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
	I1101 00:09:08.501356   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:08.501513   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
	I1101 00:09:08.501639   30593 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061/id_rsa Username:docker}
	I1101 00:09:08.594935   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 00:09:08.594993   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 00:09:08.617737   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 00:09:08.617835   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 00:09:08.639923   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 00:09:08.640003   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1101 00:09:08.662129   30593 provision.go:86] duration metric: configureAuth took 303.015088ms
	I1101 00:09:08.662155   30593 buildroot.go:189] setting minikube options for container-runtime
	I1101 00:09:08.662403   30593 config.go:182] Loaded profile config "multinode-391061": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1101 00:09:08.662426   30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
	I1101 00:09:08.662704   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
	I1101 00:09:08.665367   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.665756   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:08.665781   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.665918   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
	I1101 00:09:08.666128   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:08.666300   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:08.666449   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
	I1101 00:09:08.666613   30593 main.go:141] libmachine: Using SSH client type: native
	I1101 00:09:08.666928   30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1101 00:09:08.666940   30593 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1101 00:09:08.795906   30593 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1101 00:09:08.795936   30593 buildroot.go:70] root file system type: tmpfs
	I1101 00:09:08.796096   30593 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1101 00:09:08.796134   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
	I1101 00:09:08.798879   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.799232   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:08.799265   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.799423   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
	I1101 00:09:08.799598   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:08.799753   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:08.799868   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
	I1101 00:09:08.800041   30593 main.go:141] libmachine: Using SSH client type: native
	I1101 00:09:08.800361   30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1101 00:09:08.800421   30593 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1101 00:09:08.942805   30593 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1101 00:09:08.942844   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
	I1101 00:09:08.945908   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.946293   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:08.946326   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.946513   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
	I1101 00:09:08.946689   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:08.946882   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:08.947001   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
	I1101 00:09:08.947184   30593 main.go:141] libmachine: Using SSH client type: native
	I1101 00:09:08.947647   30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1101 00:09:08.947681   30593 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1101 00:09:09.848694   30593 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1101 00:09:09.848722   30593 machine.go:91] provisioned docker machine in 1.774228913s
	I1101 00:09:09.848735   30593 start.go:300] post-start starting for "multinode-391061" (driver="kvm2")
	I1101 00:09:09.848748   30593 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 00:09:09.848772   30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
	I1101 00:09:09.849087   30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 00:09:09.849113   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
	I1101 00:09:09.851810   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:09.852197   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:09.852243   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:09.852386   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
	I1101 00:09:09.852556   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:09.852728   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
	I1101 00:09:09.852822   30593 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061/id_rsa Username:docker}
	I1101 00:09:09.947639   30593 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 00:09:09.951509   30593 command_runner.go:130] > NAME=Buildroot
	I1101 00:09:09.951530   30593 command_runner.go:130] > VERSION=2021.02.12-1-g0cee705-dirty
	I1101 00:09:09.951535   30593 command_runner.go:130] > ID=buildroot
	I1101 00:09:09.951542   30593 command_runner.go:130] > VERSION_ID=2021.02.12
	I1101 00:09:09.951549   30593 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1101 00:09:09.951586   30593 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 00:09:09.951598   30593 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7251/.minikube/addons for local assets ...
	I1101 00:09:09.951663   30593 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7251/.minikube/files for local assets ...
	I1101 00:09:09.951768   30593 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem -> 144632.pem in /etc/ssl/certs
	I1101 00:09:09.951785   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem -> /etc/ssl/certs/144632.pem
	I1101 00:09:09.951898   30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 00:09:09.959594   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem --> /etc/ssl/certs/144632.pem (1708 bytes)
	I1101 00:09:09.981962   30593 start.go:303] post-start completed in 133.213964ms
	I1101 00:09:09.982003   30593 fix.go:56] fixHost completed within 20.179294964s
	I1101 00:09:09.982027   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
	I1101 00:09:09.984776   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:09.985223   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:09.985252   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:09.985386   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
	I1101 00:09:09.985595   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:09.985729   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:09.985860   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
	I1101 00:09:09.985979   30593 main.go:141] libmachine: Using SSH client type: native
	I1101 00:09:09.986435   30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1101 00:09:09.986451   30593 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1101 00:09:10.119733   30593 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698797350.071514552
	
	I1101 00:09:10.119761   30593 fix.go:206] guest clock: 1698797350.071514552
	I1101 00:09:10.119769   30593 fix.go:219] Guest: 2023-11-01 00:09:10.071514552 +0000 UTC Remote: 2023-11-01 00:09:09.982007618 +0000 UTC m=+20.332511469 (delta=89.506934ms)
	I1101 00:09:10.119793   30593 fix.go:190] guest clock delta is within tolerance: 89.506934ms
	I1101 00:09:10.119800   30593 start.go:83] releasing machines lock for "multinode-391061", held for 20.317128044s
	I1101 00:09:10.119826   30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
	I1101 00:09:10.120083   30593 main.go:141] libmachine: (multinode-391061) Calling .GetIP
	I1101 00:09:10.122834   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:10.123267   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:10.123301   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:10.123482   30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
	I1101 00:09:10.124067   30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
	I1101 00:09:10.124267   30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
	I1101 00:09:10.124386   30593 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 00:09:10.124433   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
	I1101 00:09:10.124459   30593 ssh_runner.go:195] Run: cat /version.json
	I1101 00:09:10.124497   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
	I1101 00:09:10.127197   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:10.127360   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:10.127632   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:10.127661   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:10.127789   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
	I1101 00:09:10.127807   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:10.127837   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:10.127985   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
	I1101 00:09:10.127991   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:10.128201   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
	I1101 00:09:10.128203   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:10.128392   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
	I1101 00:09:10.128400   30593 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061/id_rsa Username:docker}
	I1101 00:09:10.128527   30593 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061/id_rsa Username:docker}
	I1101 00:09:10.219062   30593 command_runner.go:130] > {"iso_version": "v1.32.0-1698773592-17486", "kicbase_version": "v0.0.41-1698660445-17527", "minikube_version": "v1.32.0-beta.0", "commit": "01e1cff766666ed9b9dd97c2a32d71cdb94ff3cf"}
	I1101 00:09:10.244630   30593 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1101 00:09:10.245754   30593 ssh_runner.go:195] Run: systemctl --version
	I1101 00:09:10.251311   30593 command_runner.go:130] > systemd 247 (247)
	I1101 00:09:10.251350   30593 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1101 00:09:10.251621   30593 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1101 00:09:10.256782   30593 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1101 00:09:10.256835   30593 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 00:09:10.256887   30593 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 00:09:10.271406   30593 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1101 00:09:10.271460   30593 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
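The find/mv invocation above renames competing bridge/podman CNI configs out of the way (here 87-podman-bridge.conflist) so kindnet can own the pod network. A rough Go equivalent of that rename pass; the directory and .mk_disabled suffix come from the log, the loop itself is a sketch:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        const dir = "/etc/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            // same match as the find expression: *bridge* or *podman*
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    fmt.Println(err)
                }
            }
        }
    }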
	I1101 00:09:10.271470   30593 start.go:472] detecting cgroup driver to use...
	I1101 00:09:10.271565   30593 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:09:10.288462   30593 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1101 00:09:10.288546   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1101 00:09:10.298090   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1101 00:09:10.307653   30593 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1101 00:09:10.307716   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1101 00:09:10.317073   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1101 00:09:10.326800   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1101 00:09:10.336055   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1101 00:09:10.345573   30593 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 00:09:10.355553   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1101 00:09:10.365472   30593 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 00:09:10.373896   30593 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1101 00:09:10.374055   30593 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 00:09:10.382414   30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:09:10.484557   30593 ssh_runner.go:195] Run: sudo systemctl restart containerd
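Each sed call above is a line-anchored rewrite of /etc/containerd/config.toml (sandbox image, cgroup driver, runc runtime, conf_dir) followed by a daemon reload and restart. The same SystemdCgroup edit expressed in Go with regexp, as a sketch of what one of those seds does (needs root to actually write the file):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        data, err := os.ReadFile("/etc/containerd/config.toml")
        if err != nil {
            panic(err)
        }
        // mirrors: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile("/etc/containerd/config.toml", data, 0644); err != nil {
            panic(err)
        }
    }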
	I1101 00:09:10.503546   30593 start.go:472] detecting cgroup driver to use...
	I1101 00:09:10.503677   30593 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1101 00:09:10.516143   30593 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1101 00:09:10.517085   30593 command_runner.go:130] > [Unit]
	I1101 00:09:10.517117   30593 command_runner.go:130] > Description=Docker Application Container Engine
	I1101 00:09:10.517127   30593 command_runner.go:130] > Documentation=https://docs.docker.com
	I1101 00:09:10.517135   30593 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1101 00:09:10.517143   30593 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1101 00:09:10.517151   30593 command_runner.go:130] > StartLimitBurst=3
	I1101 00:09:10.517159   30593 command_runner.go:130] > StartLimitIntervalSec=60
	I1101 00:09:10.517169   30593 command_runner.go:130] > [Service]
	I1101 00:09:10.517175   30593 command_runner.go:130] > Type=notify
	I1101 00:09:10.517185   30593 command_runner.go:130] > Restart=on-failure
	I1101 00:09:10.517197   30593 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1101 00:09:10.517218   30593 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1101 00:09:10.517247   30593 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1101 00:09:10.517256   30593 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1101 00:09:10.517266   30593 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1101 00:09:10.517276   30593 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1101 00:09:10.517285   30593 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1101 00:09:10.517306   30593 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1101 00:09:10.517318   30593 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1101 00:09:10.517328   30593 command_runner.go:130] > ExecStart=
	I1101 00:09:10.517356   30593 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1101 00:09:10.517369   30593 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1101 00:09:10.517383   30593 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1101 00:09:10.517397   30593 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1101 00:09:10.517408   30593 command_runner.go:130] > LimitNOFILE=infinity
	I1101 00:09:10.517415   30593 command_runner.go:130] > LimitNPROC=infinity
	I1101 00:09:10.517425   30593 command_runner.go:130] > LimitCORE=infinity
	I1101 00:09:10.517433   30593 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1101 00:09:10.517441   30593 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1101 00:09:10.517447   30593 command_runner.go:130] > TasksMax=infinity
	I1101 00:09:10.517454   30593 command_runner.go:130] > TimeoutStartSec=0
	I1101 00:09:10.517463   30593 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1101 00:09:10.517469   30593 command_runner.go:130] > Delegate=yes
	I1101 00:09:10.517477   30593 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1101 00:09:10.517488   30593 command_runner.go:130] > KillMode=process
	I1101 00:09:10.517502   30593 command_runner.go:130] > [Install]
	I1101 00:09:10.517521   30593 command_runner.go:130] > WantedBy=multi-user.target
	I1101 00:09:10.517760   30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:09:10.537353   30593 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 00:09:10.559962   30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:09:10.572863   30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 00:09:10.585294   30593 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1101 00:09:10.613156   30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 00:09:10.626018   30593 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:09:10.642949   30593 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1101 00:09:10.643493   30593 ssh_runner.go:195] Run: which cri-dockerd
	I1101 00:09:10.647034   30593 command_runner.go:130] > /usr/bin/cri-dockerd
	I1101 00:09:10.647148   30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1101 00:09:10.656096   30593 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1101 00:09:10.672510   30593 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1101 00:09:10.775493   30593 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1101 00:09:10.890922   30593 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
	I1101 00:09:10.891096   30593 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1101 00:09:10.911224   30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:09:11.028462   30593 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1101 00:09:12.495501   30593 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.467002879s)
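The 130-byte /etc/docker/daemon.json copied in above carries the "cgroupfs" override that docker.go:561 mentions. The log only shows the file's size, not its contents, so the payload below is an assumption about its shape, not the actual bytes minikube wrote:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // assumed shape of the override; the log records only its size (130 bytes)
        cfg := map[string]any{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        b, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(b)) // would be written to /etc/docker/daemon.json
    }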
	I1101 00:09:12.495587   30593 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1101 00:09:12.596857   30593 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1101 00:09:12.696859   30593 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1101 00:09:12.818695   30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:09:12.925882   30593 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1101 00:09:12.942696   30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:09:13.046788   30593 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1101 00:09:13.125894   30593 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1101 00:09:13.125989   30593 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1101 00:09:13.131383   30593 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1101 00:09:13.131401   30593 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1101 00:09:13.131407   30593 command_runner.go:130] > Device: 16h/22d	Inode: 823         Links: 1
	I1101 00:09:13.131414   30593 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1101 00:09:13.131420   30593 command_runner.go:130] > Access: 2023-11-01 00:09:13.012751521 +0000
	I1101 00:09:13.131425   30593 command_runner.go:130] > Modify: 2023-11-01 00:09:13.012751521 +0000
	I1101 00:09:13.131432   30593 command_runner.go:130] > Change: 2023-11-01 00:09:13.015751521 +0000
	I1101 00:09:13.131448   30593 command_runner.go:130] >  Birth: -
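"Will wait 60s for socket path" above is a stat poll: keep checking until /var/run/cri-dockerd.sock exists as a unix socket or the deadline passes. A minimal version of that wait; the 500ms interval is an assumption:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists and is a unix socket,
    // or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond) // assumed polling interval
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }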
	I1101 00:09:13.131608   30593 start.go:540] Will wait 60s for crictl version
	I1101 00:09:13.131663   30593 ssh_runner.go:195] Run: which crictl
	I1101 00:09:13.135151   30593 command_runner.go:130] > /usr/bin/crictl
	I1101 00:09:13.135210   30593 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 00:09:13.203365   30593 command_runner.go:130] > Version:  0.1.0
	I1101 00:09:13.203385   30593 command_runner.go:130] > RuntimeName:  docker
	I1101 00:09:13.203397   30593 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1101 00:09:13.203407   30593 command_runner.go:130] > RuntimeApiVersion:  v1
	I1101 00:09:13.203445   30593 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1101 00:09:13.203500   30593 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 00:09:13.228282   30593 command_runner.go:130] > 24.0.6
	I1101 00:09:13.228417   30593 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 00:09:13.252487   30593 command_runner.go:130] > 24.0.6
	I1101 00:09:13.254840   30593 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1101 00:09:13.254880   30593 main.go:141] libmachine: (multinode-391061) Calling .GetIP
	I1101 00:09:13.257487   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:13.257845   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:13.257879   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:13.258035   30593 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 00:09:13.261869   30593 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
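The bash one-liner above makes the host.minikube.internal entry idempotent: strip any stale line, append the current mapping, copy the result back. The same filter-and-append in Go, as a sketch (needs root to write /etc/hosts):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const entry = "192.168.39.1\thost.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // same filter as: grep -v $'\thost.minikube.internal$'
            if strings.HasSuffix(line, "\thost.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
    }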
	I1101 00:09:13.272965   30593 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1101 00:09:13.273017   30593 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 00:09:13.291973   30593 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.3
	I1101 00:09:13.292012   30593 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 00:09:13.292018   30593 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.3
	I1101 00:09:13.292023   30593 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.3
	I1101 00:09:13.292028   30593 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1101 00:09:13.292033   30593 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1101 00:09:13.292039   30593 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1101 00:09:13.292046   30593 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1101 00:09:13.292051   30593 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 00:09:13.292058   30593 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1101 00:09:13.292659   30593 docker.go:699] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1101 00:09:13.292679   30593 docker.go:629] Images already preloaded, skipping extraction
	I1101 00:09:13.292737   30593 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 00:09:13.311772   30593 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.3
	I1101 00:09:13.311797   30593 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.3
	I1101 00:09:13.311806   30593 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 00:09:13.311814   30593 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.3
	I1101 00:09:13.311821   30593 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1101 00:09:13.311826   30593 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1101 00:09:13.311831   30593 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1101 00:09:13.311836   30593 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1101 00:09:13.311841   30593 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 00:09:13.311857   30593 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1101 00:09:13.311882   30593 docker.go:699] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1101 00:09:13.311900   30593 cache_images.go:84] Images are preloaded, skipping loading
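"Images are preloaded, skipping loading" comes from comparing the `docker images` listing above against the image list this Kubernetes version requires; extraction only runs when something is missing. A set-difference sketch of that check (function name and sample lists are illustrative):

    package main

    import "fmt"

    func missingImages(want, got []string) []string {
        have := make(map[string]bool, len(got))
        for _, img := range got {
            have[img] = true
        }
        var missing []string
        for _, img := range want {
            if !have[img] {
                missing = append(missing, img)
            }
        }
        return missing
    }

    func main() {
        want := []string{"registry.k8s.io/kube-apiserver:v1.28.3", "registry.k8s.io/pause:3.9"}
        got := []string{"registry.k8s.io/kube-apiserver:v1.28.3", "registry.k8s.io/pause:3.9"}
        fmt.Println(missingImages(want, got)) // empty -> preloaded, skip extraction
    }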
	I1101 00:09:13.311963   30593 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1101 00:09:13.336389   30593 command_runner.go:130] > cgroupfs
	I1101 00:09:13.336458   30593 cni.go:84] Creating CNI manager for ""
	I1101 00:09:13.336469   30593 cni.go:136] 2 nodes found, recommending kindnet
	I1101 00:09:13.336493   30593 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 00:09:13.336521   30593 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.43 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-391061 NodeName:multinode-391061 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.43"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.43 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}

	I1101 00:09:13.336694   30593 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.43
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-391061"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.43
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.43"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 00:09:13.336788   30593 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-391061 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-391061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
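The kubeadm.yaml and kubelet unit printed above are rendered from the option struct shown at kubeadm.go:176. A cut-down text/template rendering of the InitConfiguration head, just to illustrate the mechanism; the template text is hypothetical and the values are taken from the log:

    package main

    import (
        "os"
        "text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        opts := struct {
            AdvertiseAddress string
            APIServerPort    int
            CRISocket        string
            NodeName         string
        }{"192.168.39.43", 8443, "unix:///var/run/cri-dockerd.sock", "multinode-391061"}
        template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, opts)
    }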
	I1101 00:09:13.336851   30593 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 00:09:13.346367   30593 command_runner.go:130] > kubeadm
	I1101 00:09:13.346390   30593 command_runner.go:130] > kubectl
	I1101 00:09:13.346396   30593 command_runner.go:130] > kubelet
	I1101 00:09:13.346518   30593 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 00:09:13.346594   30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 00:09:13.355275   30593 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1101 00:09:13.370971   30593 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 00:09:13.387036   30593 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1101 00:09:13.402440   30593 ssh_runner.go:195] Run: grep 192.168.39.43	control-plane.minikube.internal$ /etc/hosts
	I1101 00:09:13.406022   30593 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.43	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 00:09:13.417070   30593 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061 for IP: 192.168.39.43
	I1101 00:09:13.417103   30593 certs.go:190] acquiring lock for shared ca certs: {Name:mkd78a553474b872bb63abf547b6fa0a317dc3be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:09:13.417247   30593 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.key
	I1101 00:09:13.417296   30593 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.key
	I1101 00:09:13.417388   30593 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/client.key
	I1101 00:09:13.417450   30593 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/apiserver.key.7e75dda5
	I1101 00:09:13.417508   30593 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/proxy-client.key
	I1101 00:09:13.417523   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 00:09:13.417544   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 00:09:13.417575   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 00:09:13.417593   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 00:09:13.417603   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 00:09:13.417615   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 00:09:13.417625   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 00:09:13.417636   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 00:09:13.417690   30593 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463.pem (1338 bytes)
	W1101 00:09:13.417720   30593 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463_empty.pem, impossibly tiny 0 bytes
	I1101 00:09:13.417729   30593 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 00:09:13.417752   30593 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem (1082 bytes)
	I1101 00:09:13.417776   30593 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem (1123 bytes)
	I1101 00:09:13.417804   30593 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem (1675 bytes)
	I1101 00:09:13.417847   30593 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem (1708 bytes)
	I1101 00:09:13.417870   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463.pem -> /usr/share/ca-certificates/14463.pem
	I1101 00:09:13.417882   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem -> /usr/share/ca-certificates/144632.pem
	I1101 00:09:13.417894   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:09:13.418474   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 00:09:13.440131   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 00:09:13.461354   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 00:09:13.484158   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 00:09:13.507642   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 00:09:13.530560   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 00:09:13.552173   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 00:09:13.572803   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 00:09:13.594200   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463.pem --> /usr/share/ca-certificates/14463.pem (1338 bytes)
	I1101 00:09:13.614546   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem --> /usr/share/ca-certificates/144632.pem (1708 bytes)
	I1101 00:09:13.635287   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 00:09:13.655804   30593 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 00:09:13.671160   30593 ssh_runner.go:195] Run: openssl version
	I1101 00:09:13.676595   30593 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1101 00:09:13.676661   30593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14463.pem && ln -fs /usr/share/ca-certificates/14463.pem /etc/ssl/certs/14463.pem"
	I1101 00:09:13.687719   30593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14463.pem
	I1101 00:09:13.692306   30593 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 31 23:48 /usr/share/ca-certificates/14463.pem
	I1101 00:09:13.692356   30593 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:48 /usr/share/ca-certificates/14463.pem
	I1101 00:09:13.692398   30593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14463.pem
	I1101 00:09:13.697913   30593 command_runner.go:130] > 51391683
	I1101 00:09:13.698156   30593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14463.pem /etc/ssl/certs/51391683.0"
	I1101 00:09:13.708708   30593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144632.pem && ln -fs /usr/share/ca-certificates/144632.pem /etc/ssl/certs/144632.pem"
	I1101 00:09:13.718932   30593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144632.pem
	I1101 00:09:13.723625   30593 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 31 23:48 /usr/share/ca-certificates/144632.pem
	I1101 00:09:13.723665   30593 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:48 /usr/share/ca-certificates/144632.pem
	I1101 00:09:13.723717   30593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144632.pem
	I1101 00:09:13.729381   30593 command_runner.go:130] > 3ec20f2e
	I1101 00:09:13.729472   30593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144632.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 00:09:13.739928   30593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 00:09:13.749888   30593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:09:13.754135   30593 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 31 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:09:13.754186   30593 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:09:13.754224   30593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:09:13.759372   30593 command_runner.go:130] > b5213941
	I1101 00:09:13.759586   30593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
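Each hash/symlink pair above installs a CA certificate into OpenSSL's hashed-lookup directory: `openssl x509 -hash` prints the subject-name hash (e.g. b5213941) and `ln -fs <hash>.0` makes the cert discoverable there. The same sequence shelled out from Go, as a sketch (assumes openssl on PATH; helper name is illustrative):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func installCert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // ln -fs semantics: replace any existing link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }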
	I1101 00:09:13.770878   30593 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 00:09:13.774944   30593 command_runner.go:130] > ca.crt
	I1101 00:09:13.774961   30593 command_runner.go:130] > ca.key
	I1101 00:09:13.774966   30593 command_runner.go:130] > healthcheck-client.crt
	I1101 00:09:13.774977   30593 command_runner.go:130] > healthcheck-client.key
	I1101 00:09:13.774981   30593 command_runner.go:130] > peer.crt
	I1101 00:09:13.774985   30593 command_runner.go:130] > peer.key
	I1101 00:09:13.774988   30593 command_runner.go:130] > server.crt
	I1101 00:09:13.774993   30593 command_runner.go:130] > server.key
	I1101 00:09:13.775195   30593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 00:09:13.780693   30593 command_runner.go:130] > Certificate will not expire
	I1101 00:09:13.781005   30593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 00:09:13.786438   30593 command_runner.go:130] > Certificate will not expire
	I1101 00:09:13.786773   30593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 00:09:13.792247   30593 command_runner.go:130] > Certificate will not expire
	I1101 00:09:13.792305   30593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 00:09:13.797510   30593 command_runner.go:130] > Certificate will not expire
	I1101 00:09:13.797845   30593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 00:09:13.803206   30593 command_runner.go:130] > Certificate will not expire
	I1101 00:09:13.803273   30593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 00:09:13.808620   30593 command_runner.go:130] > Certificate will not expire
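The run of `openssl x509 -checkend 86400` calls above asks, for each cert, whether it expires within 24 hours; "Certificate will not expire" means the restart can reuse them. The same check without shelling out, using crypto/x509 (a sketch; the helper name is illustrative):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in pemPath
    // expires within d (what `openssl x509 -checkend` tests).
    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("expires within 24h:", soon) // false -> "Certificate will not expire"
    }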
	I1101 00:09:13.808816   30593 kubeadm.go:404] StartCluster: {Name:multinode-391061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-391061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.249 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:09:13.808974   30593 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1101 00:09:13.826906   30593 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 00:09:13.836480   30593 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1101 00:09:13.836509   30593 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1101 00:09:13.836518   30593 command_runner.go:130] > /var/lib/minikube/etcd:
	I1101 00:09:13.836524   30593 command_runner.go:130] > member
	I1101 00:09:13.836597   30593 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 00:09:13.836612   30593 kubeadm.go:636] restartCluster start
	I1101 00:09:13.836669   30593 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 00:09:13.845747   30593 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:13.846165   30593 kubeconfig.go:135] verify returned: extract IP: "multinode-391061" does not appear in /home/jenkins/minikube-integration/17486-7251/kubeconfig
	I1101 00:09:13.846289   30593 kubeconfig.go:146] "multinode-391061" context is missing from /home/jenkins/minikube-integration/17486-7251/kubeconfig - will repair!
	I1101 00:09:13.846620   30593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7251/kubeconfig: {Name:mk525de6243b20b40961c1a878f4272a26e9a097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:09:13.847028   30593 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17486-7251/kubeconfig
	I1101 00:09:13.847260   30593 kapi.go:59] client config for multinode-391061: &rest.Config{Host:"https://192.168.39.43:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/client.key", CAFile:"/home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 00:09:13.847933   30593 cert_rotation.go:137] Starting client certificate rotation controller
	I1101 00:09:13.848016   30593 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 00:09:13.857014   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:13.857066   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:13.868306   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:13.868326   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:13.868365   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:13.879425   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:14.380169   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:14.380271   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:14.393563   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:14.879961   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:14.880030   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:14.891500   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:15.380030   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:15.380116   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:15.394849   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:15.880377   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:15.880462   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:15.892276   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:16.379827   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:16.379933   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:16.391756   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:16.880389   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:16.880484   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:16.892186   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:17.379748   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:17.379838   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:17.391913   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:17.880537   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:17.880630   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:17.893349   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:18.379933   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:18.380022   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:18.391643   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:18.880268   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:18.880355   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:18.892132   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:19.379676   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:19.379760   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:19.391501   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:19.880377   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:19.880494   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:19.892270   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:20.379875   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:20.379968   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:20.391559   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:20.880250   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:20.880355   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:20.891729   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:21.380337   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:21.380407   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:21.391986   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:21.879571   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:21.879681   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:21.891291   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:22.379884   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:22.379978   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:22.391825   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:22.880476   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:22.880570   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:22.892224   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:23.379724   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:23.379835   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:23.391883   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:23.857628   30593 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
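The ~10 seconds of failed pgrep probes above is a poll-until-deadline loop; when the context expires with no apiserver process found, restartCluster falls through to the reconfigure path. A generic version of that loop; the 500ms interval and 10s timeout are read off the log timestamps, not taken from minikube's source:

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func waitForAPIServer(ctx context.Context) error {
        ticker := time.NewTicker(500 * time.Millisecond) // matches the ~500ms spacing above
        defer ticker.Stop()
        for {
            // same probe as the log: pgrep -xnf kube-apiserver.*minikube.*
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err() // -> "context deadline exceeded"
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        fmt.Println(waitForAPIServer(ctx))
    }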
	I1101 00:09:23.857661   30593 kubeadm.go:1128] stopping kube-system containers ...
	I1101 00:09:23.857758   30593 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1101 00:09:23.879399   30593 command_runner.go:130] > c8ec107c7b83
	I1101 00:09:23.879423   30593 command_runner.go:130] > 8a050fec9e56
	I1101 00:09:23.879444   30593 command_runner.go:130] > 0922f8b627ba
	I1101 00:09:23.879448   30593 command_runner.go:130] > 7e5dd13abba8
	I1101 00:09:23.879453   30593 command_runner.go:130] > 717d368b8c2a
	I1101 00:09:23.879456   30593 command_runner.go:130] > beeaf0ac020b
	I1101 00:09:23.879460   30593 command_runner.go:130] > d52c65ebca75
	I1101 00:09:23.879464   30593 command_runner.go:130] > 5c355a51915e
	I1101 00:09:23.879467   30593 command_runner.go:130] > 6e72da581d8b
	I1101 00:09:23.879471   30593 command_runner.go:130] > 37d9dd0022b9
	I1101 00:09:23.879475   30593 command_runner.go:130] > c5ea3d84d06f
	I1101 00:09:23.879479   30593 command_runner.go:130] > 32294fac02b3
	I1101 00:09:23.879482   30593 command_runner.go:130] > a49a86a47d7c
	I1101 00:09:23.879486   30593 command_runner.go:130] > 36d5f0bd5cf2
	I1101 00:09:23.879494   30593 command_runner.go:130] > 92b70c8321ee
	I1101 00:09:23.879498   30593 command_runner.go:130] > 9f5176fde232
	I1101 00:09:23.879502   30593 command_runner.go:130] > f576715f1f47
	I1101 00:09:23.879506   30593 command_runner.go:130] > 44a2cc98732a
	I1101 00:09:23.879509   30593 command_runner.go:130] > 5a2e590156b6
	I1101 00:09:23.879518   30593 command_runner.go:130] > feea3a57d77e
	I1101 00:09:23.879525   30593 command_runner.go:130] > 7ad930b36263
	I1101 00:09:23.879528   30593 command_runner.go:130] > b110676d9563
	I1101 00:09:23.879533   30593 command_runner.go:130] > 8659d1168087
	I1101 00:09:23.879540   30593 command_runner.go:130] > 7f78495183a7
	I1101 00:09:23.879543   30593 command_runner.go:130] > 21b2a7338538
	I1101 00:09:23.879547   30593 command_runner.go:130] > 2b739c443c07
	I1101 00:09:23.879553   30593 command_runner.go:130] > f8c33525e5e4
	I1101 00:09:23.879557   30593 command_runner.go:130] > b6d83949182f
	I1101 00:09:23.879561   30593 command_runner.go:130] > 8dc7f1a0f0cf
	I1101 00:09:23.879565   30593 command_runner.go:130] > d114ab0f9727
	I1101 00:09:23.879569   30593 command_runner.go:130] > 88e660774880
	I1101 00:09:23.880506   30593 docker.go:470] Stopping containers: [c8ec107c7b83 8a050fec9e56 0922f8b627ba 7e5dd13abba8 717d368b8c2a beeaf0ac020b d52c65ebca75 5c355a51915e 6e72da581d8b 37d9dd0022b9 c5ea3d84d06f 32294fac02b3 a49a86a47d7c 36d5f0bd5cf2 92b70c8321ee 9f5176fde232 f576715f1f47 44a2cc98732a 5a2e590156b6 feea3a57d77e 7ad930b36263 b110676d9563 8659d1168087 7f78495183a7 21b2a7338538 2b739c443c07 f8c33525e5e4 b6d83949182f 8dc7f1a0f0cf d114ab0f9727 88e660774880]
	I1101 00:09:23.880594   30593 ssh_runner.go:195] Run: docker stop c8ec107c7b83 8a050fec9e56 0922f8b627ba 7e5dd13abba8 717d368b8c2a beeaf0ac020b d52c65ebca75 5c355a51915e 6e72da581d8b 37d9dd0022b9 c5ea3d84d06f 32294fac02b3 a49a86a47d7c 36d5f0bd5cf2 92b70c8321ee 9f5176fde232 f576715f1f47 44a2cc98732a 5a2e590156b6 feea3a57d77e 7ad930b36263 b110676d9563 8659d1168087 7f78495183a7 21b2a7338538 2b739c443c07 f8c33525e5e4 b6d83949182f 8dc7f1a0f0cf d114ab0f9727 88e660774880
	I1101 00:09:23.906747   30593 command_runner.go:130] > c8ec107c7b83
	I1101 00:09:23.906784   30593 command_runner.go:130] > 8a050fec9e56
	I1101 00:09:23.906790   30593 command_runner.go:130] > 0922f8b627ba
	I1101 00:09:23.906941   30593 command_runner.go:130] > 7e5dd13abba8
	I1101 00:09:23.907074   30593 command_runner.go:130] > 717d368b8c2a
	I1101 00:09:23.907086   30593 command_runner.go:130] > beeaf0ac020b
	I1101 00:09:23.907092   30593 command_runner.go:130] > d52c65ebca75
	I1101 00:09:23.907110   30593 command_runner.go:130] > 5c355a51915e
	I1101 00:09:23.907116   30593 command_runner.go:130] > 6e72da581d8b
	I1101 00:09:23.907123   30593 command_runner.go:130] > 37d9dd0022b9
	I1101 00:09:23.907130   30593 command_runner.go:130] > c5ea3d84d06f
	I1101 00:09:23.907139   30593 command_runner.go:130] > 32294fac02b3
	I1101 00:09:23.907146   30593 command_runner.go:130] > a49a86a47d7c
	I1101 00:09:23.907157   30593 command_runner.go:130] > 36d5f0bd5cf2
	I1101 00:09:23.907168   30593 command_runner.go:130] > 92b70c8321ee
	I1101 00:09:23.907176   30593 command_runner.go:130] > 9f5176fde232
	I1101 00:09:23.907188   30593 command_runner.go:130] > f576715f1f47
	I1101 00:09:23.907198   30593 command_runner.go:130] > 44a2cc98732a
	I1101 00:09:23.907202   30593 command_runner.go:130] > 5a2e590156b6
	I1101 00:09:23.907207   30593 command_runner.go:130] > feea3a57d77e
	I1101 00:09:23.907213   30593 command_runner.go:130] > 7ad930b36263
	I1101 00:09:23.907220   30593 command_runner.go:130] > b110676d9563
	I1101 00:09:23.907227   30593 command_runner.go:130] > 8659d1168087
	I1101 00:09:23.907238   30593 command_runner.go:130] > 7f78495183a7
	I1101 00:09:23.907244   30593 command_runner.go:130] > 21b2a7338538
	I1101 00:09:23.907254   30593 command_runner.go:130] > 2b739c443c07
	I1101 00:09:23.907263   30593 command_runner.go:130] > f8c33525e5e4
	I1101 00:09:23.907270   30593 command_runner.go:130] > b6d83949182f
	I1101 00:09:23.907278   30593 command_runner.go:130] > 8dc7f1a0f0cf
	I1101 00:09:23.907284   30593 command_runner.go:130] > d114ab0f9727
	I1101 00:09:23.907288   30593 command_runner.go:130] > 88e660774880
	I1101 00:09:23.908329   30593 ssh_runner.go:195] Run: sudo systemctl stop kubelet
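
	Note: the restart path quiesces the node before reconfiguring it — every container is stopped in one batch (the ID list echoed above), then the kubelet itself, so nothing restarts the old control plane while the manifests are regenerated. A minimal sketch of that teardown, assuming a Docker runtime and systemd (not minikube's actual helper code):

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    func main() {
	    	// Collect every container ID (running or exited), stop them in
	    	// one batch, then stop the kubelet so the static pods are not
	    	// resurrected mid-reconfigure.
	    	out, err := exec.Command("docker", "ps", "-aq").Output()
	    	if err != nil {
	    		fmt.Println("docker ps:", err)
	    		return
	    	}
	    	if ids := strings.Fields(string(out)); len(ids) > 0 {
	    		if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
	    			fmt.Println("docker stop:", err)
	    			return
	    		}
	    	}
	    	if err := exec.Command("sudo", "systemctl", "stop", "kubelet").Run(); err != nil {
	    		fmt.Println("systemctl stop kubelet:", err)
	    	}
	    }
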
	I1101 00:09:23.924405   30593 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 00:09:23.933413   30593 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1101 00:09:23.933460   30593 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1101 00:09:23.933474   30593 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1101 00:09:23.933508   30593 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 00:09:23.933573   30593 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
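
	Note: the "config check failed, skipping stale config cleanup" message is expected on this path — the four kubeconfigs were removed with the old state, so the non-zero exit from ls simply means there is nothing stale to clean up before kubeadm regenerates them. A sketch of the same probe (the function name is hypothetical):

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    // staleKubeconfigsPresent reports whether the well-known kubeadm
	    // kubeconfigs still exist on the node; a non-zero exit from ls
	    // means they are absent and cleanup can be skipped.
	    func staleKubeconfigsPresent() bool {
	    	files := []string{
	    		"/etc/kubernetes/admin.conf",
	    		"/etc/kubernetes/kubelet.conf",
	    		"/etc/kubernetes/controller-manager.conf",
	    		"/etc/kubernetes/scheduler.conf",
	    	}
	    	err := exec.Command("sudo", append([]string{"ls", "-la"}, files...)...).Run()
	    	return err == nil
	    }

	    func main() {
	    	fmt.Println("stale kubeconfigs present:", staleKubeconfigsPresent())
	    }
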
	I1101 00:09:23.933632   30593 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 00:09:23.942681   30593 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 00:09:23.942716   30593 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:09:24.061200   30593 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 00:09:24.061740   30593 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1101 00:09:24.062273   30593 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1101 00:09:24.062864   30593 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 00:09:24.063543   30593 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1101 00:09:24.064483   30593 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1101 00:09:24.065146   30593 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1101 00:09:24.065723   30593 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1101 00:09:24.066240   30593 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1101 00:09:24.066826   30593 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 00:09:24.067296   30593 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 00:09:24.067896   30593 command_runner.go:130] > [certs] Using the existing "sa" key
	I1101 00:09:24.069200   30593 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:09:24.889031   30593 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 00:09:24.889057   30593 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 00:09:24.889063   30593 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 00:09:24.889069   30593 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 00:09:24.889075   30593 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 00:09:24.889099   30593 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:09:25.068922   30593 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 00:09:25.068953   30593 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 00:09:25.068959   30593 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1101 00:09:25.069343   30593 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:09:25.134897   30593 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 00:09:25.134925   30593 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 00:09:25.141279   30593 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 00:09:25.148755   30593 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 00:09:25.153988   30593 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:09:25.224920   30593 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
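
	Note: instead of a full "kubeadm init", the reconfigure path replays the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml; that is why every certificate above is reported as "existing" while the kubeconfigs and static pod manifests are rewritten. A sketch of driving that sequence with the same paths seen in the log:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    func main() {
	    	// The init phases replayed by the restart path, in log order.
	    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	    	for _, phase := range phases {
	    		cmd := "sudo env PATH=\"/var/lib/minikube/binaries/v1.28.3:$PATH\" kubeadm init phase " +
	    			phase + " --config /var/tmp/minikube/kubeadm.yaml"
	    		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
	    			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
	    			return
	    		}
	    	}
	    }
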
	I1101 00:09:25.228266   30593 api_server.go:52] waiting for apiserver process to appear ...
	I1101 00:09:25.228336   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:09:25.246286   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:09:25.761474   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:09:26.261798   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:09:26.761515   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:09:27.261570   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:09:27.761008   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:09:27.804720   30593 command_runner.go:130] > 1704
	I1101 00:09:27.806000   30593 api_server.go:72] duration metric: took 2.577736282s to wait for apiserver process to appear ...
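
	Note: "waiting for apiserver process to appear" is a plain pgrep poll on roughly a 500ms cadence; the bare "1704" echoed above is the PID that finally matched. A sketch of the loop:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    	"time"
	    )

	    // waitForAPIServerProcess polls pgrep until a kube-apiserver process
	    // appears, mirroring the ~500ms retry cadence visible in the log.
	    func waitForAPIServerProcess(timeout time.Duration) (string, error) {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	    		if err == nil {
	    			return strings.TrimSpace(string(out)), nil // the PID, e.g. "1704"
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    	return "", fmt.Errorf("kube-apiserver did not appear within %v", timeout)
	    }

	    func main() {
	    	fmt.Println(waitForAPIServerProcess(30 * time.Second))
	    }
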
	I1101 00:09:27.806022   30593 api_server.go:88] waiting for apiserver healthz status ...
	I1101 00:09:27.806041   30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I1101 00:09:27.806649   30593 api_server.go:269] stopped: https://192.168.39.43:8443/healthz: Get "https://192.168.39.43:8443/healthz": dial tcp 192.168.39.43:8443: connect: connection refused
	I1101 00:09:27.806703   30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I1101 00:09:27.807202   30593 api_server.go:269] stopped: https://192.168.39.43:8443/healthz: Get "https://192.168.39.43:8443/healthz": dial tcp 192.168.39.43:8443: connect: connection refused
	I1101 00:09:28.307960   30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I1101 00:09:31.401471   30593 api_server.go:279] https://192.168.39.43:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 00:09:31.401504   30593 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 00:09:31.401515   30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I1101 00:09:31.478349   30593 api_server.go:279] https://192.168.39.43:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 00:09:31.478386   30593 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 00:09:31.807657   30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I1101 00:09:31.816386   30593 api_server.go:279] https://192.168.39.43:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 00:09:31.816421   30593 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 00:09:32.308084   30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I1101 00:09:32.313351   30593 api_server.go:279] https://192.168.39.43:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 00:09:32.313393   30593 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 00:09:32.807687   30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I1101 00:09:32.814924   30593 api_server.go:279] https://192.168.39.43:8443/healthz returned 200:
	ok
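
	Note: the healthz probe goes through the expected progression for a cold apiserver — connection refused while the process is still binding, 403 while anonymous access to /healthz is not yet permitted, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still pending, and finally 200 "ok". A sketch of the retry loop (TLS verification is skipped for brevity; a real client would pin the cluster CA):

	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"io"
	    	"net/http"
	    	"time"
	    )

	    // pollHealthz retries GET /healthz until it returns 200, tolerating
	    // connection-refused, 403 and 500 responses along the way.
	    func pollHealthz(url string, timeout time.Duration) error {
	    	client := &http.Client{
	    		Timeout:   2 * time.Second,
	    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    	}
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		resp, err := client.Get(url)
	    		if err == nil {
	    			body, _ := io.ReadAll(resp.Body)
	    			resp.Body.Close()
	    			if resp.StatusCode == http.StatusOK {
	    				return nil
	    			}
	    			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    	return fmt.Errorf("apiserver not healthy after %v", timeout)
	    }

	    func main() {
	    	fmt.Println(pollHealthz("https://192.168.39.43:8443/healthz", time.Minute))
	    }
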
	I1101 00:09:32.815019   30593 round_trippers.go:463] GET https://192.168.39.43:8443/version
	I1101 00:09:32.815029   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:32.815039   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:32.815049   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:32.823839   30593 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1101 00:09:32.823862   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:32.823873   30593 round_trippers.go:580]     Audit-Id: 654a1cb8-a85b-41cb-aea3-21ea6bc79004
	I1101 00:09:32.823885   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:32.823891   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:32.823898   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:32.823905   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:32.823913   30593 round_trippers.go:580]     Content-Length: 264
	I1101 00:09:32.823921   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:32 GMT
	I1101 00:09:32.823947   30593 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1101 00:09:32.824032   30593 api_server.go:141] control plane version: v1.28.3
	I1101 00:09:32.824050   30593 api_server.go:131] duration metric: took 5.018019595s to wait for apiserver health ...
	I1101 00:09:32.824061   30593 cni.go:84] Creating CNI manager for ""
	I1101 00:09:32.824070   30593 cni.go:136] 2 nodes found, recommending kindnet
	I1101 00:09:32.826169   30593 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1101 00:09:32.827914   30593 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 00:09:32.841919   30593 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1101 00:09:32.841942   30593 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1101 00:09:32.841948   30593 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1101 00:09:32.841955   30593 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1101 00:09:32.841960   30593 command_runner.go:130] > Access: 2023-11-01 00:09:01.939751521 +0000
	I1101 00:09:32.841969   30593 command_runner.go:130] > Modify: 2023-10-31 23:04:20.000000000 +0000
	I1101 00:09:32.841974   30593 command_runner.go:130] > Change: 2023-11-01 00:09:00.154751521 +0000
	I1101 00:09:32.841979   30593 command_runner.go:130] >  Birth: -
	I1101 00:09:32.843041   30593 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1101 00:09:32.843061   30593 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1101 00:09:32.868639   30593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 00:09:34.233741   30593 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1101 00:09:34.264714   30593 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1101 00:09:34.269029   30593 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1101 00:09:34.306476   30593 command_runner.go:130] > daemonset.apps/kindnet configured
	I1101 00:09:34.313598   30593 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.44492846s)
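
	Note: reapplying the kindnet manifest is safe because kubectl apply is declarative — resources that already match the manifest come back "unchanged", and only the daemonset needed a patch ("configured"). A sketch of the same invocation, counting how many resources actually changed:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    func main() {
	    	out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.3/kubectl",
	    		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
	    		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
	    	if err != nil {
	    		fmt.Printf("apply failed: %v\n%s", err, out)
	    		return
	    	}
	    	// "unchanged" lines mean the live object already matched;
	    	// "configured" lines mean a patch was sent.
	    	changed := strings.Count(string(out), " configured")
	    	fmt.Printf("apply ok, %d resource(s) changed\n", changed)
	    }
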
	I1101 00:09:34.313628   30593 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 00:09:34.313739   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I1101 00:09:34.313753   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:34.313764   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.313774   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.328832   30593 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1101 00:09:34.328855   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:34.328863   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:34.328871   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:34.328944   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.328962   30593 round_trippers.go:580]     Audit-Id: 9a80f099-79a4-48ce-bc32-9266f1c0dc9f
	I1101 00:09:34.328971   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.328985   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.330618   30593 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1205"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84772 chars]
	I1101 00:09:34.334579   30593 system_pods.go:59] 12 kube-system pods found
	I1101 00:09:34.334612   30593 system_pods.go:61] "coredns-5dd5756b68-dg5w7" [eb94555e-1465-4dec-9d6d-ebcbec02841e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 00:09:34.334627   30593 system_pods.go:61] "etcd-multinode-391061" [0537cc4c-2127-4424-b02f-9e4747bc8713] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 00:09:34.334633   30593 system_pods.go:61] "kindnet-4jfj9" [2559e20b-85cf-43d5-8663-7ec855d71df9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 00:09:34.334638   30593 system_pods.go:61] "kindnet-lcljq" [171d5f22-d781-4224-88f7-f940ad9e747b] Running
	I1101 00:09:34.334642   30593 system_pods.go:61] "kindnet-wrdhd" [85db010e-82bd-4efa-a760-0669bf1e52de] Running
	I1101 00:09:34.334649   30593 system_pods.go:61] "kube-apiserver-multinode-391061" [dff82899-3db2-46a2-aea0-ec57d58be1c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 00:09:34.334659   30593 system_pods.go:61] "kube-controller-manager-multinode-391061" [4775e566-6acd-43ac-b7cd-8dbd245c33cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 00:09:34.334666   30593 system_pods.go:61] "kube-proxy-clsrp" [a747b091-d679-4ae6-a995-c980235c9a61] Running
	I1101 00:09:34.334670   30593 system_pods.go:61] "kube-proxy-rcnv9" [9b65a6f4-4c34-40e5-a5bd-aedfc335cbc9] Running
	I1101 00:09:34.334674   30593 system_pods.go:61] "kube-proxy-vdjh2" [9838a111-09e4-4975-b925-1ae5dcfa7334] Running
	I1101 00:09:34.334679   30593 system_pods.go:61] "kube-scheduler-multinode-391061" [eaf767ff-8f68-4b91-bcd7-b550481a6155] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 00:09:34.334685   30593 system_pods.go:61] "storage-provisioner" [b0b970e9-7d0b-4e94-8ca8-2f3348eaf579] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 00:09:34.334691   30593 system_pods.go:74] duration metric: took 21.056413ms to wait for pod list to return data ...
	I1101 00:09:34.334704   30593 node_conditions.go:102] verifying NodePressure condition ...
	I1101 00:09:34.334757   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes
	I1101 00:09:34.334764   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:34.334771   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.334777   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.340145   30593 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1101 00:09:34.340163   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:34.340169   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:34.340175   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.340180   30593 round_trippers.go:580]     Audit-Id: 1531eb5d-604e-4c94-96b1-59616ac61bc1
	I1101 00:09:34.340185   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.340189   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.340199   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:34.340500   30593 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1205"},"items":[{"metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v [truncated 9590 chars]
	I1101 00:09:34.341106   30593 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:09:34.341127   30593 node_conditions.go:123] node cpu capacity is 2
	I1101 00:09:34.341135   30593 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:09:34.341139   30593 node_conditions.go:123] node cpu capacity is 2
	I1101 00:09:34.341143   30593 node_conditions.go:105] duration metric: took 6.435475ms to run NodePressure ...
	I1101 00:09:34.341158   30593 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:09:34.596643   30593 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1101 00:09:34.664781   30593 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1101 00:09:34.667106   30593 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1101 00:09:34.667212   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I1101 00:09:34.667221   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:34.667228   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.667234   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.673886   30593 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1101 00:09:34.673905   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:34.673912   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.673918   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.673923   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:34.673936   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:34.673941   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.673946   30593 round_trippers.go:580]     Audit-Id: 7dc67d14-eb2e-46d1-aa78-54d52af1af34
	I1101 00:09:34.675336   30593 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1208"},"items":[{"metadata":{"name":"etcd-multinode-391061","namespace":"kube-system","uid":"0537cc4c-2127-4424-b02f-9e4747bc8713","resourceVersion":"1180","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.43:2379","kubernetes.io/config.hash":"3983ae368fa449f28180d36143aa3911","kubernetes.io/config.mirror":"3983ae368fa449f28180d36143aa3911","kubernetes.io/config.seen":"2023-11-01T00:02:21.059094445Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 29766 chars]
	I1101 00:09:34.676627   30593 kubeadm.go:787] kubelet initialised
	I1101 00:09:34.676644   30593 kubeadm.go:788] duration metric: took 9.518378ms waiting for restarted kubelet to initialise ...
	I1101 00:09:34.676651   30593 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:09:34.676705   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I1101 00:09:34.676713   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:34.676720   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.676728   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.683293   30593 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1101 00:09:34.683308   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:34.683315   30593 round_trippers.go:580]     Audit-Id: b0192f99-985e-4aae-927b-c47d95fe8014
	I1101 00:09:34.683321   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.683327   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.683332   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:34.683338   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:34.683350   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.685550   30593 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1208"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84772 chars]
	I1101 00:09:34.688329   30593 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:34.688397   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
	I1101 00:09:34.688408   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:34.688416   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.688421   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.698455   30593 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1101 00:09:34.699740   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:34.699755   30593 round_trippers.go:580]     Audit-Id: eb7d9633-7fab-456d-a9f4-795f402a1e5a
	I1101 00:09:34.699764   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.699774   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.699785   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:34.699794   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:34.699803   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.699985   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1101 00:09:34.700490   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:34.700507   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:34.700517   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.700526   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.713644   30593 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1101 00:09:34.713666   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:34.713679   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:34.713686   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:34.713694   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.713702   30593 round_trippers.go:580]     Audit-Id: ee2f8b85-6ebc-4ce5-b02d-f9b38983f319
	I1101 00:09:34.713710   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.713722   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.713963   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
	I1101 00:09:34.714314   30593 pod_ready.go:97] node "multinode-391061" hosting pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
	I1101 00:09:34.714332   30593 pod_ready.go:81] duration metric: took 25.984465ms waiting for pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace to be "Ready" ...
	E1101 00:09:34.714343   30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061" hosting pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
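
	Note: each system pod is skipped rather than failed when its node reports Ready=False — the wait loop fetches the hosting node and short-circuits on the node's Ready condition, since no pod can become Ready before its node does. The same pattern repeats below for etcd, kube-apiserver, kube-controller-manager and kube-proxy. A sketch of extracting that condition from the raw Node JSON:

	    package main

	    import (
	    	"encoding/json"
	    	"fmt"
	    )

	    // nodeReady pulls the Ready condition out of a Node object as
	    // returned by GET /api/v1/nodes/<name>.
	    func nodeReady(nodeJSON []byte) (bool, error) {
	    	var node struct {
	    		Status struct {
	    			Conditions []struct {
	    				Type   string `json:"type"`
	    				Status string `json:"status"`
	    			} `json:"conditions"`
	    		} `json:"status"`
	    	}
	    	if err := json.Unmarshal(nodeJSON, &node); err != nil {
	    		return false, err
	    	}
	    	for _, c := range node.Status.Conditions {
	    		if c.Type == "Ready" {
	    			return c.Status == "True", nil
	    		}
	    	}
	    	return false, fmt.Errorf("no Ready condition found")
	    }

	    func main() {
	    	sample := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
	    	fmt.Println(nodeReady(sample))
	    }
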
	I1101 00:09:34.714355   30593 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-391061" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:34.714451   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-391061
	I1101 00:09:34.714465   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:34.714476   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.714486   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.716800   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:34.716818   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:34.716827   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.716838   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.716846   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:34.716854   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:34.716866   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.716879   30593 round_trippers.go:580]     Audit-Id: 0183d545-7a83-4bf3-bb19-280d54d90e72
	I1101 00:09:34.717288   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-391061","namespace":"kube-system","uid":"0537cc4c-2127-4424-b02f-9e4747bc8713","resourceVersion":"1180","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.43:2379","kubernetes.io/config.hash":"3983ae368fa449f28180d36143aa3911","kubernetes.io/config.mirror":"3983ae368fa449f28180d36143aa3911","kubernetes.io/config.seen":"2023-11-01T00:02:21.059094445Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6296 chars]
	I1101 00:09:34.717688   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:34.717702   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:34.717708   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.717715   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.719608   30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:09:34.719624   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:34.719632   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.719640   30593 round_trippers.go:580]     Audit-Id: cc656017-62ca-46cc-93aa-6f56e0bacf57
	I1101 00:09:34.719647   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.719655   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.719663   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:34.719673   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:34.719831   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
	I1101 00:09:34.720155   30593 pod_ready.go:97] node "multinode-391061" hosting pod "etcd-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
	I1101 00:09:34.720173   30593 pod_ready.go:81] duration metric: took 5.809883ms waiting for pod "etcd-multinode-391061" in "kube-system" namespace to be "Ready" ...
	E1101 00:09:34.720181   30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061" hosting pod "etcd-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
	I1101 00:09:34.720222   30593 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-391061" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:34.720281   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
	I1101 00:09:34.720291   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:34.720302   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.720316   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.727693   30593 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1101 00:09:34.727724   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:34.727735   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:34.727746   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:34.727757   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.727768   30593 round_trippers.go:580]     Audit-Id: f429dcbd-b1c6-47e9-b094-3b51b74fd598
	I1101 00:09:34.727779   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.727790   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.727953   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
	I1101 00:09:34.728461   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:34.728479   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:34.728490   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.728500   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.730599   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:34.730613   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:34.730619   30593 round_trippers.go:580]     Audit-Id: 0de3f8aa-089c-4434-b8d3-d71e99713bfd
	I1101 00:09:34.730624   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.730632   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.730644   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:34.730660   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:34.730670   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.730850   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
	I1101 00:09:34.731213   30593 pod_ready.go:97] node "multinode-391061" hosting pod "kube-apiserver-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
	I1101 00:09:34.731234   30593 pod_ready.go:81] duration metric: took 11.0013ms waiting for pod "kube-apiserver-multinode-391061" in "kube-system" namespace to be "Ready" ...
	E1101 00:09:34.731247   30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061" hosting pod "kube-apiserver-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
	I1101 00:09:34.731266   30593 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-391061" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:34.731321   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-391061
	I1101 00:09:34.731332   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:34.731342   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.731350   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.735460   30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:09:34.735475   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:34.735481   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:34.735488   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.735501   30593 round_trippers.go:580]     Audit-Id: 2bd7494f-9968-4fd2-aca0-bb70496933d6
	I1101 00:09:34.735518   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.735525   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.735540   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:34.735848   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-391061","namespace":"kube-system","uid":"4775e566-6acd-43ac-b7cd-8dbd245c33cf","resourceVersion":"1178","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"129a8ea77cdb10a9dd895cecf9b472c5","kubernetes.io/config.mirror":"129a8ea77cdb10a9dd895cecf9b472c5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059092388Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I1101 00:09:34.736287   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:34.736300   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:34.736307   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.736315   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.738460   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:34.738480   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:34.738490   30593 round_trippers.go:580]     Audit-Id: b9555108-2183-46ca-b82f-b9cd6213e770
	I1101 00:09:34.738511   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.738524   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.738532   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:34.738547   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:34.738555   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.738690   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
	I1101 00:09:34.739057   30593 pod_ready.go:97] node "multinode-391061" hosting pod "kube-controller-manager-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
	I1101 00:09:34.739086   30593 pod_ready.go:81] duration metric: took 7.809638ms waiting for pod "kube-controller-manager-multinode-391061" in "kube-system" namespace to be "Ready" ...
	E1101 00:09:34.739103   30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061" hosting pod "kube-controller-manager-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
	I1101 00:09:34.739113   30593 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-clsrp" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:34.914034   30593 request.go:629] Waited for 174.835524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-clsrp
	I1101 00:09:34.914109   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-clsrp
	I1101 00:09:34.914114   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:34.914121   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.914131   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.916919   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:34.916946   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:34.916955   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.916964   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.916972   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:34.916983   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:34.916990   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.917003   30593 round_trippers.go:580]     Audit-Id: 7b74a314-8cec-4d22-9be3-8af74ba926c4
	I1101 00:09:34.917222   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-clsrp","generateName":"kube-proxy-","namespace":"kube-system","uid":"a747b091-d679-4ae6-a995-c980235c9a61","resourceVersion":"1203","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4c7befcb-672f-40b1-8090-82e04bd6e1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c7befcb-672f-40b1-8090-82e04bd6e1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5738 chars]
	I1101 00:09:35.113972   30593 request.go:629] Waited for 196.314968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:35.114094   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:35.114106   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:35.114117   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:35.114128   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:35.116700   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:35.116727   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:35.116736   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:35.116744   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:35 GMT
	I1101 00:09:35.116752   30593 round_trippers.go:580]     Audit-Id: 520e1602-a5d2-496e-9336-3d05ae9bf431
	I1101 00:09:35.116760   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:35.116769   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:35.116778   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:35.116880   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
	I1101 00:09:35.117203   30593 pod_ready.go:97] node "multinode-391061" hosting pod "kube-proxy-clsrp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
	I1101 00:09:35.117220   30593 pod_ready.go:81] duration metric: took 378.09771ms waiting for pod "kube-proxy-clsrp" in "kube-system" namespace to be "Ready" ...
	E1101 00:09:35.117234   30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061" hosting pod "kube-proxy-clsrp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
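The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's token-bucket rate limiter, not from the API server: the kapi client config dumped later in this log has QPS:0 and Burst:0, so client-go falls back to its defaults of 5 requests/s with a burst of 10, and the readiness poller queues behind its own limiter. A minimal sketch of raising those limits, assuming a hypothetical kubeconfig path rather than anything from this run:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical path; this run's kubeconfig lives under the Jenkins workspace.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // sustained requests per second before client-side throttling
	cfg.Burst = 100 // short bursts allowed above the sustained rate
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", clientset)
}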
	I1101 00:09:35.117249   30593 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rcnv9" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:35.314720   30593 request.go:629] Waited for 197.37685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rcnv9
	I1101 00:09:35.314784   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rcnv9
	I1101 00:09:35.314790   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:35.314797   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:35.314806   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:35.317474   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:35.317495   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:35.317502   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:35 GMT
	I1101 00:09:35.317508   30593 round_trippers.go:580]     Audit-Id: 9af5c93f-eeb8-4bf5-91cf-0004ad594526
	I1101 00:09:35.317513   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:35.317526   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:35.317532   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:35.317537   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:35.317656   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rcnv9","generateName":"kube-proxy-","namespace":"kube-system","uid":"9b65a6f4-4c34-40e5-a5bd-aedfc335cbc9","resourceVersion":"983","creationTimestamp":"2023-11-01T00:03:22Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4c7befcb-672f-40b1-8090-82e04bd6e1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:03:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c7befcb-672f-40b1-8090-82e04bd6e1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5749 chars]
	I1101 00:09:35.514541   30593 request.go:629] Waited for 196.422301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m02
	I1101 00:09:35.514605   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m02
	I1101 00:09:35.514610   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:35.514620   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:35.514626   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:35.516964   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:35.516981   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:35.516987   30593 round_trippers.go:580]     Audit-Id: f60ca5be-eff7-45b6-b4ef-25a4244f2ac8
	I1101 00:09:35.516992   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:35.516999   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:35.517007   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:35.517016   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:35.517024   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:35 GMT
	I1101 00:09:35.517144   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061-m02","uid":"75fe164a-6fd6-4525-bacf-d792a509255b","resourceVersion":"999","creationTimestamp":"2023-11-01T00:07:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3253 chars]
	I1101 00:09:35.517386   30593 pod_ready.go:92] pod "kube-proxy-rcnv9" in "kube-system" namespace has status "Ready":"True"
	I1101 00:09:35.517399   30593 pod_ready.go:81] duration metric: took 400.144025ms waiting for pod "kube-proxy-rcnv9" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:35.517407   30593 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vdjh2" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:35.713801   30593 request.go:629] Waited for 196.321571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vdjh2
	I1101 00:09:35.713897   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vdjh2
	I1101 00:09:35.713902   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:35.713912   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:35.713919   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:35.718570   30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:09:35.718593   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:35.718599   30593 round_trippers.go:580]     Audit-Id: a80b7d1f-2804-4453-9d76-e2f5feeecd8b
	I1101 00:09:35.718604   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:35.718609   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:35.718614   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:35.718619   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:35.718624   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:35 GMT
	I1101 00:09:35.719017   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vdjh2","generateName":"kube-proxy-","namespace":"kube-system","uid":"9838a111-09e4-4975-b925-1ae5dcfa7334","resourceVersion":"1096","creationTimestamp":"2023-11-01T00:04:18Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4c7befcb-672f-40b1-8090-82e04bd6e1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:04:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c7befcb-672f-40b1-8090-82e04bd6e1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I1101 00:09:35.914812   30593 request.go:629] Waited for 195.361033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m03
	I1101 00:09:35.914878   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m03
	I1101 00:09:35.914884   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:35.914892   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:35.914905   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:35.918630   30593 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I1101 00:09:35.918651   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:35.918658   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:35.918669   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:35.918675   30593 round_trippers.go:580]     Content-Length: 210
	I1101 00:09:35.918680   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:35 GMT
	I1101 00:09:35.918685   30593 round_trippers.go:580]     Audit-Id: 8559bcdf-7ea2-4533-82a7-71b9489af62e
	I1101 00:09:35.918693   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:35.918698   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:35.918716   30593 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-391061-m03\" not found","reason":"NotFound","details":{"name":"multinode-391061-m03","kind":"nodes"},"code":404}
	I1101 00:09:35.918899   30593 pod_ready.go:97] node "multinode-391061-m03" hosting pod "kube-proxy-vdjh2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-391061-m03": nodes "multinode-391061-m03" not found
	I1101 00:09:35.918915   30593 pod_ready.go:81] duration metric: took 401.503391ms waiting for pod "kube-proxy-vdjh2" in "kube-system" namespace to be "Ready" ...
	E1101 00:09:35.918928   30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061-m03" hosting pod "kube-proxy-vdjh2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-391061-m03": nodes "multinode-391061-m03" not found
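The 404 above is expected on this restart path: kube-proxy-vdjh2 still references multinode-391061-m03 from the previous run, but that node object was never recreated, so pod_ready.go downgrades the wait to a skip instead of an error. A minimal sketch of that not-found handling with client-go (names are illustrative, not minikube's actual helpers):

package readiness

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// skipIfHostNodeGone reports whether a pod's readiness wait should be skipped
// because its host node no longer exists, mirroring the WaitExtra lines above.
func skipIfHostNodeGone(ctx context.Context, cs *kubernetes.Clientset, nodeName string) (bool, error) {
	_, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return true, nil // node deleted: skip the pod rather than fail the wait
	}
	return false, err
}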
	I1101 00:09:35.918938   30593 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-391061" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:36.114381   30593 request.go:629] Waited for 195.370649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
	I1101 00:09:36.114441   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
	I1101 00:09:36.114446   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:36.114453   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:36.114459   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:36.117280   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:36.117299   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:36.117305   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:36.117310   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:36.117316   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:36.117324   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:36 GMT
	I1101 00:09:36.117332   30593 round_trippers.go:580]     Audit-Id: 1a904aba-8eb8-4b24-84bc-bed0f6168940
	I1101 00:09:36.117345   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:36.117488   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-391061","namespace":"kube-system","uid":"eaf767ff-8f68-4b91-bcd7-b550481a6155","resourceVersion":"1187","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.mirror":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.seen":"2023-11-01T00:02:21.059093363Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I1101 00:09:36.314311   30593 request.go:629] Waited for 196.435913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:36.314416   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:36.314424   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:36.314432   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:36.314438   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:36.317156   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:36.317180   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:36.317187   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:36.317193   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:36 GMT
	I1101 00:09:36.317198   30593 round_trippers.go:580]     Audit-Id: 438f8f57-c6d3-4b09-82e1-c9c57e8542d5
	I1101 00:09:36.317207   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:36.317226   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:36.317232   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:36.317370   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
	I1101 00:09:36.317685   30593 pod_ready.go:97] node "multinode-391061" hosting pod "kube-scheduler-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
	I1101 00:09:36.317702   30593 pod_ready.go:81] duration metric: took 398.74998ms waiting for pod "kube-scheduler-multinode-391061" in "kube-system" namespace to be "Ready" ...
	E1101 00:09:36.317710   30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061" hosting pod "kube-scheduler-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
	I1101 00:09:36.317717   30593 pod_ready.go:38] duration metric: took 1.641059341s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
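Each pod_ready cycle above pairs a GET on the pod with a GET on its host node: a pod only counts once its own PodReady condition is True and the hosting node reports Ready, which is why every control-plane pod was skipped while multinode-391061 still had "Ready":"False". A minimal sketch of the pod-side half of that check, assuming client-go's core/v1 types rather than minikube's pod_ready.go internals:

package readiness

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether the pod's PodReady condition is True; the node
// side of the check above is evaluated separately against the host node.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}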
	I1101 00:09:36.317736   30593 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 00:09:36.328581   30593 command_runner.go:130] > -16
	I1101 00:09:36.329017   30593 ops.go:34] apiserver oom_adj: -16
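The -16 that ops.go reads back is the legacy /proc/<pid>/oom_adj view (range -17 to 15) of the strongly negative oom_score_adj the kubelet assigns to the apiserver's Guaranteed static pod, i.e. the process is nearly immune from the OOM killer. A small sketch of the same read done natively instead of via "cat" over SSH (the pgrep pid lookup is elided):

package ops

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readOOMAdj returns the legacy oom_adj value for pid, the value the
// "cat /proc/$(pgrep kube-apiserver)/oom_adj" step above checks over SSH.
func readOOMAdj(pid int) (int, error) {
	b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(b)))
}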
	I1101 00:09:36.329031   30593 kubeadm.go:640] restartCluster took 22.492412523s
	I1101 00:09:36.329039   30593 kubeadm.go:406] StartCluster complete in 22.520229717s
	I1101 00:09:36.329066   30593 settings.go:142] acquiring lock: {Name:mk57c659cffa0c6a1b184e5906c662f85ff8a099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:09:36.329145   30593 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7251/kubeconfig
	I1101 00:09:36.329734   30593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7251/kubeconfig: {Name:mk525de6243b20b40961c1a878f4272a26e9a097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:09:36.329976   30593 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 00:09:36.330139   30593 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 00:09:36.330259   30593 config.go:182] Loaded profile config "multinode-391061": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1101 00:09:36.332516   30593 out.go:177] * Enabled addons: 
	I1101 00:09:36.330334   30593 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17486-7251/kubeconfig
	I1101 00:09:36.334140   30593 addons.go:502] enable addons completed in 4.002956ms: enabled=[]
	I1101 00:09:36.332878   30593 kapi.go:59] client config for multinode-391061: &rest.Config{Host:"https://192.168.39.43:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/client.key", CAFile:"/home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 00:09:36.334423   30593 round_trippers.go:463] GET https://192.168.39.43:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1101 00:09:36.334436   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:36.334446   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:36.334454   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:36.337955   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:36.337986   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:36.337996   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:36.338004   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:36.338012   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:36.338027   30593 round_trippers.go:580]     Content-Length: 292
	I1101 00:09:36.338038   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:36 GMT
	I1101 00:09:36.338050   30593 round_trippers.go:580]     Audit-Id: 9324051b-7b18-4bb3-a5fe-00967444602f
	I1101 00:09:36.338061   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:36.338088   30593 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0a6ee33a-4e79-49d5-be0e-4e19b76eb2c6","resourceVersion":"1206","creationTimestamp":"2023-11-01T00:02:20Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1101 00:09:36.338210   30593 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-391061" context rescaled to 1 replicas
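The rescale above goes through the Deployment's scale subresource: the GET on .../deployments/coredns/scale returns an autoscaling/v1 Scale already at replicas:1, so there is nothing to change. A minimal client-go sketch of the same idempotent rescale (function name is illustrative):

package dns

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rescaleCoreDNS pins the coredns deployment to replicas via the scale
// subresource, as the kapi.go line above reports for this profile.
func rescaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset, replicas int32) error {
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if scale.Spec.Replicas == replicas {
		return nil // already at the desired count, as in this run
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}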
	I1101 00:09:36.338240   30593 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1101 00:09:36.340479   30593 out.go:177] * Verifying Kubernetes components...
	I1101 00:09:36.342243   30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:09:36.464070   30593 command_runner.go:130] > apiVersion: v1
	I1101 00:09:36.464088   30593 command_runner.go:130] > data:
	I1101 00:09:36.464092   30593 command_runner.go:130] >   Corefile: |
	I1101 00:09:36.464096   30593 command_runner.go:130] >     .:53 {
	I1101 00:09:36.464099   30593 command_runner.go:130] >         log
	I1101 00:09:36.464104   30593 command_runner.go:130] >         errors
	I1101 00:09:36.464108   30593 command_runner.go:130] >         health {
	I1101 00:09:36.464112   30593 command_runner.go:130] >            lameduck 5s
	I1101 00:09:36.464116   30593 command_runner.go:130] >         }
	I1101 00:09:36.464124   30593 command_runner.go:130] >         ready
	I1101 00:09:36.464129   30593 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1101 00:09:36.464134   30593 command_runner.go:130] >            pods insecure
	I1101 00:09:36.464139   30593 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1101 00:09:36.464143   30593 command_runner.go:130] >            ttl 30
	I1101 00:09:36.464147   30593 command_runner.go:130] >         }
	I1101 00:09:36.464151   30593 command_runner.go:130] >         prometheus :9153
	I1101 00:09:36.464154   30593 command_runner.go:130] >         hosts {
	I1101 00:09:36.464159   30593 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I1101 00:09:36.464163   30593 command_runner.go:130] >            fallthrough
	I1101 00:09:36.464167   30593 command_runner.go:130] >         }
	I1101 00:09:36.464175   30593 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1101 00:09:36.464180   30593 command_runner.go:130] >            max_concurrent 1000
	I1101 00:09:36.464184   30593 command_runner.go:130] >         }
	I1101 00:09:36.464188   30593 command_runner.go:130] >         cache 30
	I1101 00:09:36.464193   30593 command_runner.go:130] >         loop
	I1101 00:09:36.464198   30593 command_runner.go:130] >         reload
	I1101 00:09:36.464202   30593 command_runner.go:130] >         loadbalance
	I1101 00:09:36.464217   30593 command_runner.go:130] >     }
	I1101 00:09:36.464224   30593 command_runner.go:130] > kind: ConfigMap
	I1101 00:09:36.464228   30593 command_runner.go:130] > metadata:
	I1101 00:09:36.464233   30593 command_runner.go:130] >   creationTimestamp: "2023-11-01T00:02:20Z"
	I1101 00:09:36.464237   30593 command_runner.go:130] >   name: coredns
	I1101 00:09:36.464242   30593 command_runner.go:130] >   namespace: kube-system
	I1101 00:09:36.464246   30593 command_runner.go:130] >   resourceVersion: "404"
	I1101 00:09:36.464251   30593 command_runner.go:130] >   uid: 9916bcab-f9a6-4b1c-a0a4-a33e2e2f738c
	I1101 00:09:36.466580   30593 node_ready.go:35] waiting up to 6m0s for node "multinode-391061" to be "Ready" ...
	I1101 00:09:36.466667   30593 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
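start.go:899 can skip the patch because the Corefile dumped above already carries the hosts block mapping 192.168.39.1 to host.minikube.internal. minikube performs this check by reading the ConfigMap through kubectl over SSH; a client-go sketch of the same idempotence test, with illustrative names:

package dns

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hasHostRecord reports whether the coredns Corefile already resolves
// host.minikube.internal, allowing the injection step to be skipped as above.
func hasHostRecord(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	return strings.Contains(cm.Data["Corefile"], "host.minikube.internal"), nil
}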
	I1101 00:09:36.513888   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:36.513918   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:36.513926   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:36.513933   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:36.516967   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:36.516991   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:36.517002   30593 round_trippers.go:580]     Audit-Id: 4d84eb47-da1a-4fd0-96d7-b23c142dcf7c
	I1101 00:09:36.517010   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:36.517018   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:36.517030   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:36.517038   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:36.517064   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:36 GMT
	I1101 00:09:36.517425   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
	I1101 00:09:36.714232   30593 request.go:629] Waited for 196.4313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:36.714301   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:36.714308   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:36.714319   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:36.714329   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:36.716978   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:36.716999   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:36.717006   30593 round_trippers.go:580]     Audit-Id: 043fbdbd-3263-4587-9070-be445407c188
	I1101 00:09:36.717012   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:36.717017   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:36.717022   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:36.717027   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:36.717035   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:36 GMT
	I1101 00:09:36.717202   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
	I1101 00:09:37.218413   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:37.218434   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:37.218447   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:37.218453   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:37.222719   30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:09:37.222748   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:37.222759   30593 round_trippers.go:580]     Audit-Id: 917dad8e-af16-42b6-88ae-5dcab424bb1e
	I1101 00:09:37.222768   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:37.222778   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:37.222790   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:37.222802   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:37.222813   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:37 GMT
	I1101 00:09:37.223475   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
	I1101 00:09:37.718082   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:37.718126   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:37.718135   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:37.718141   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:37.721049   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:37.721077   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:37.721088   30593 round_trippers.go:580]     Audit-Id: 06dcc7c1-bdd2-4e9f-870d-80146268aafa
	I1101 00:09:37.721101   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:37.721121   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:37.721130   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:37.721139   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:37.721148   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:37 GMT
	I1101 00:09:37.721272   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
	I1101 00:09:38.218868   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:38.218893   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:38.218903   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:38.218912   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:38.222059   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:38.222083   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:38.222105   30593 round_trippers.go:580]     Audit-Id: ad14bc98-1add-4a13-8ab1-495ec6575c6e
	I1101 00:09:38.222111   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:38.222116   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:38.222121   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:38.222126   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:38.222131   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:38 GMT
	I1101 00:09:38.222638   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
	I1101 00:09:38.718331   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:38.718356   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:38.718364   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:38.718370   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:38.721280   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:38.721307   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:38.721314   30593 round_trippers.go:580]     Audit-Id: 32a342cc-ec48-43cc-b0f0-efe6838ba34f
	I1101 00:09:38.721319   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:38.721324   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:38.721329   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:38.721334   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:38.721339   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:38 GMT
	I1101 00:09:38.721695   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:38.722003   30593 node_ready.go:49] node "multinode-391061" has status "Ready":"True"
	I1101 00:09:38.722018   30593 node_ready.go:38] duration metric: took 2.255410222s waiting for node "multinode-391061" to be "Ready" ...
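The 2.255s above covers a handful of GETs on /api/v1/nodes/multinode-391061 at roughly 500ms intervals, until the node's resourceVersion moved from 1133 to 1218 and its Ready condition flipped to True. A minimal sketch of that polling loop with client-go (the interval and helper name are assumptions, not minikube's node_ready.go internals):

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node until its NodeReady condition reports True,
// the loop behind the repeated node GETs above.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}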
	I1101 00:09:38.722030   30593 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:09:38.722093   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I1101 00:09:38.722102   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:38.722113   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:38.722121   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:38.726178   30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:09:38.726200   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:38.726211   30593 round_trippers.go:580]     Audit-Id: d4651bc2-6bb9-4745-9c25-8f2b530c877c
	I1101 00:09:38.726220   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:38.726227   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:38.726236   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:38.726244   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:38.726253   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:38 GMT
	I1101 00:09:38.727979   30593 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1218"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84372 chars]
	I1101 00:09:38.731666   30593 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:38.731777   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
	I1101 00:09:38.731788   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:38.731797   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:38.731804   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:38.734353   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:38.734368   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:38.734375   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:38.734380   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:38.734386   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:38.734391   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:38.734396   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:38 GMT
	I1101 00:09:38.734401   30593 round_trippers.go:580]     Audit-Id: f0f6d35c-893f-4b34-bb39-154e16bedbe1
	I1101 00:09:38.734672   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1101 00:09:38.735183   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:38.735200   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:38.735208   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:38.735214   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:38.737368   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:38.737382   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:38.737388   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:38.737393   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:38.737398   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:38.737405   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:38 GMT
	I1101 00:09:38.737418   30593 round_trippers.go:580]     Audit-Id: f978b19f-d984-48d1-b95c-0f850f106969
	I1101 00:09:38.737423   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:38.737700   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:38.738062   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
	I1101 00:09:38.738078   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:38.738086   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:38.738092   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:38.740363   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:38.740379   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:38.740385   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:38.740390   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:38.740395   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:38.740408   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:38.740418   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:38 GMT
	I1101 00:09:38.740423   30593 round_trippers.go:580]     Audit-Id: c33f3cc3-4753-4832-a887-2f2bce060625
	I1101 00:09:38.740727   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1101 00:09:38.741200   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:38.741213   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:38.741220   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:38.741226   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:38.743369   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:38.743385   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:38.743392   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:38 GMT
	I1101 00:09:38.743397   30593 round_trippers.go:580]     Audit-Id: ccc0a48d-0d10-468a-a49f-71ad3ebd3363
	I1101 00:09:38.743402   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:38.743407   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:38.743414   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:38.743419   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:38.743797   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:39.244680   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
	I1101 00:09:39.244705   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:39.244713   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:39.244719   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:39.249913   30593 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1101 00:09:39.249935   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:39.249943   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:39.249948   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:39.249954   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:39.249959   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:39.249964   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:39 GMT
	I1101 00:09:39.249971   30593 round_trippers.go:580]     Audit-Id: 12d94c73-c75e-46e9-871a-9b74acd630d6
	I1101 00:09:39.250237   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1101 00:09:39.250731   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:39.250745   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:39.250754   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:39.250760   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:39.253732   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:39.253752   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:39.253761   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:39.253770   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:39 GMT
	I1101 00:09:39.253778   30593 round_trippers.go:580]     Audit-Id: 2a48db27-174b-4246-a989-ca7f61b115f9
	I1101 00:09:39.253787   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:39.253793   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:39.253798   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:39.254037   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:39.744690   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
	I1101 00:09:39.744715   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:39.744724   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:39.744729   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:39.748026   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:39.748050   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:39.748060   30593 round_trippers.go:580]     Audit-Id: d31dc218-4603-4f82-a559-2e3697ff06e2
	I1101 00:09:39.748072   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:39.748080   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:39.748087   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:39.748098   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:39.748105   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:39 GMT
	I1101 00:09:39.748732   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1101 00:09:39.749181   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:39.749196   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:39.749206   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:39.749215   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:39.751958   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:39.751980   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:39.751989   30593 round_trippers.go:580]     Audit-Id: b460f490-de79-4762-b30a-6cdd07942ced
	I1101 00:09:39.751997   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:39.752005   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:39.752015   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:39.752021   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:39.752029   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:39 GMT
	I1101 00:09:39.752310   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:40.244413   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
	I1101 00:09:40.244438   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:40.244446   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:40.244452   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:40.248489   30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:09:40.248512   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:40.248521   30593 round_trippers.go:580]     Audit-Id: ccff4954-c9ff-4a7f-9536-aa2b767dc311
	I1101 00:09:40.248528   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:40.248533   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:40.248538   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:40.248544   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:40.248549   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:40 GMT
	I1101 00:09:40.248729   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1101 00:09:40.249180   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:40.249194   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:40.249201   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:40.249209   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:40.252171   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:40.252188   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:40.252194   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:40.252199   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:40.252203   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:40.252208   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:40.252213   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:40 GMT
	I1101 00:09:40.252218   30593 round_trippers.go:580]     Audit-Id: ca95e9f6-880f-4555-aa29-16a66b7bf628
	I1101 00:09:40.252484   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:40.745314   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
	I1101 00:09:40.745341   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:40.745350   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:40.745357   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:40.747878   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:40.747895   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:40.747902   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:40.747910   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:40.747924   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:40 GMT
	I1101 00:09:40.747932   30593 round_trippers.go:580]     Audit-Id: b88089ad-e6cf-4b38-b7fb-da565b4e5c79
	I1101 00:09:40.747940   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:40.747951   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:40.748125   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1101 00:09:40.748587   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:40.748601   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:40.748611   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:40.748617   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:40.750689   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:40.750703   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:40.750710   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:40 GMT
	I1101 00:09:40.750721   30593 round_trippers.go:580]     Audit-Id: 3a208361-9be9-4a15-8f86-f26ff624d9b3
	I1101 00:09:40.750729   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:40.750736   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:40.750744   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:40.750755   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:40.750912   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:40.751208   30593 pod_ready.go:102] pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace has status "Ready":"False"
	I1101 00:09:41.244531   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
	I1101 00:09:41.244555   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:41.244563   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:41.244569   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:41.247236   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:41.247254   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:41.247264   30593 round_trippers.go:580]     Audit-Id: 0a7a1192-7352-4f99-a239-ebbd6ca40e85
	I1101 00:09:41.247272   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:41.247279   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:41.247289   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:41.247298   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:41.247318   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:41 GMT
	I1101 00:09:41.247449   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1101 00:09:41.247870   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:41.247882   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:41.247889   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:41.247894   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:41.250080   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:41.250098   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:41.250104   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:41.250109   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:41 GMT
	I1101 00:09:41.250114   30593 round_trippers.go:580]     Audit-Id: 629d69c5-3174-4a7d-aa0d-8f22f6d5b2f6
	I1101 00:09:41.250130   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:41.250138   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:41.250146   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:41.250326   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:41.745038   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
	I1101 00:09:41.745066   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:41.745074   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:41.745080   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:41.748544   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:41.748570   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:41.748581   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:41.748590   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:41.748598   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:41.748606   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:41 GMT
	I1101 00:09:41.748625   30593 round_trippers.go:580]     Audit-Id: b22bcb01-f5bf-4a1d-aad0-6c0ab2d577d4
	I1101 00:09:41.748637   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:41.748855   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1101 00:09:41.749306   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:41.749318   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:41.749325   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:41.749331   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:41.755594   30593 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1101 00:09:41.755639   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:41.755649   30593 round_trippers.go:580]     Audit-Id: a64448a4-caec-4cfe-9700-2fbbc35230d2
	I1101 00:09:41.755657   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:41.755665   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:41.755673   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:41.755680   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:41.755695   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:41 GMT
	I1101 00:09:41.755860   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:42.244432   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
	I1101 00:09:42.244456   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:42.244464   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:42.244470   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:42.247204   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:42.247227   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:42.247238   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:42.247247   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:42.247256   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:42 GMT
	I1101 00:09:42.247267   30593 round_trippers.go:580]     Audit-Id: 003f9883-5c30-40fd-aa1f-88b585473b07
	I1101 00:09:42.247272   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:42.247278   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:42.247475   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1232","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
	I1101 00:09:42.248064   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:42.248082   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:42.248093   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:42.248100   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:42.251135   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:42.251152   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:42.251158   30593 round_trippers.go:580]     Audit-Id: 1d944e3b-2b90-4cb4-b54e-e4dc8e023493
	I1101 00:09:42.251168   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:42.251172   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:42.251177   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:42.251182   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:42.251187   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:42 GMT
	I1101 00:09:42.251385   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:42.251763   30593 pod_ready.go:92] pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace has status "Ready":"True"
	I1101 00:09:42.251782   30593 pod_ready.go:81] duration metric: took 3.52008861s waiting for pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:42.251794   30593 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-391061" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:42.251868   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-391061
	I1101 00:09:42.251880   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:42.251891   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:42.251901   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:42.253932   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:42.253950   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:42.253957   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:42.253962   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:42.253967   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:42.253975   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:42 GMT
	I1101 00:09:42.253980   30593 round_trippers.go:580]     Audit-Id: 8a73d4e8-1e4e-4883-908a-5c09ce62f8c3
	I1101 00:09:42.253985   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:42.254150   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-391061","namespace":"kube-system","uid":"0537cc4c-2127-4424-b02f-9e4747bc8713","resourceVersion":"1227","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.43:2379","kubernetes.io/config.hash":"3983ae368fa449f28180d36143aa3911","kubernetes.io/config.mirror":"3983ae368fa449f28180d36143aa3911","kubernetes.io/config.seen":"2023-11-01T00:02:21.059094445Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6072 chars]
	I1101 00:09:42.254640   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:42.254655   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:42.254674   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:42.254685   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:42.256694   30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:09:42.256708   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:42.256715   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:42.256723   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:42.256731   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:42.256740   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:42 GMT
	I1101 00:09:42.256749   30593 round_trippers.go:580]     Audit-Id: 4c1b620e-fff1-4494-89d2-83c513fc0fc0
	I1101 00:09:42.256757   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:42.256951   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:42.257268   30593 pod_ready.go:92] pod "etcd-multinode-391061" in "kube-system" namespace has status "Ready":"True"
	I1101 00:09:42.257283   30593 pod_ready.go:81] duration metric: took 5.477797ms waiting for pod "etcd-multinode-391061" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:42.257306   30593 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-391061" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:42.257369   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
	I1101 00:09:42.257379   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:42.257390   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:42.257399   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:42.259467   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:42.259483   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:42.259492   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:42 GMT
	I1101 00:09:42.259499   30593 round_trippers.go:580]     Audit-Id: 05d95e16-1d4e-4f81-a9d5-b2b141ff765d
	I1101 00:09:42.259508   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:42.259517   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:42.259526   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:42.259535   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:42.259733   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
	I1101 00:09:42.260255   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:42.260274   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:42.260281   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:42.260287   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:42.262250   30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:09:42.262265   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:42.262275   30593 round_trippers.go:580]     Audit-Id: ff748f0c-35a9-4061-b5ed-b0472309e27b
	I1101 00:09:42.262282   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:42.262290   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:42.262298   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:42.262310   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:42.262318   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:42 GMT
	I1101 00:09:42.262580   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:42.314176   30593 request.go:629] Waited for 51.260114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
	I1101 00:09:42.314237   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
	I1101 00:09:42.314242   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:42.314249   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:42.314256   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:42.317908   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:42.317937   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:42.317948   30593 round_trippers.go:580]     Audit-Id: fa52f436-6e2b-418e-972d-6b4c1f1c0fcb
	I1101 00:09:42.317957   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:42.317966   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:42.317971   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:42.317976   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:42.317984   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:42 GMT
	I1101 00:09:42.318154   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
	I1101 00:09:42.514148   30593 request.go:629] Waited for 195.42483ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:42.514213   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:42.514221   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:42.514235   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:42.514291   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:42.516991   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:42.517017   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:42.517026   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:42.517035   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:42 GMT
	I1101 00:09:42.517044   30593 round_trippers.go:580]     Audit-Id: 71439942-ddcd-4159-8952-4d34c7b14582
	I1101 00:09:42.517052   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:42.517059   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:42.517068   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:42.517221   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:43.018410   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
	I1101 00:09:43.018439   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:43.018449   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:43.018459   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:43.021587   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:43.021609   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:43.021616   30593 round_trippers.go:580]     Audit-Id: 7c4f42ca-82c7-4601-9dd3-7fa193eec32f
	I1101 00:09:43.021621   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:43.021626   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:43.021631   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:43.021636   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:43.021642   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:42 GMT
	I1101 00:09:43.021917   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
	I1101 00:09:43.022342   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:43.022357   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:43.022368   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:43.022376   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:43.025247   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:43.025262   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:43.025268   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:43.025280   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:43.025289   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:43.025298   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:43.025310   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:42 GMT
	I1101 00:09:43.025316   30593 round_trippers.go:580]     Audit-Id: a4d1586f-de58-43b9-93f2-43b9726b8133
	I1101 00:09:43.025864   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:43.518711   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
	I1101 00:09:43.518737   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:43.518746   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:43.518752   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:43.521991   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:43.522017   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:43.522027   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:43.522036   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:43.522044   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:43 GMT
	I1101 00:09:43.522058   30593 round_trippers.go:580]     Audit-Id: ee145f23-1a35-4e40-acd4-1b329858fdfd
	I1101 00:09:43.522065   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:43.522076   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:43.522321   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
	I1101 00:09:43.522816   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:43.522832   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:43.522839   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:43.522845   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:43.525300   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:43.525321   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:43.525329   30593 round_trippers.go:580]     Audit-Id: a16446ac-4c9e-462b-a604-37ce52442eb5
	I1101 00:09:43.525336   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:43.525344   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:43.525351   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:43.525358   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:43.525365   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:43 GMT
	I1101 00:09:43.525589   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:44.018504   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
	I1101 00:09:44.018526   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:44.018534   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:44.018539   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:44.021345   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:44.021368   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:44.021379   30593 round_trippers.go:580]     Audit-Id: 23afddaf-e391-4a40-9206-ba5a97021cd1
	I1101 00:09:44.021389   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:44.021397   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:44.021402   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:44.021408   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:44.021413   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:43 GMT
	I1101 00:09:44.021781   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
	I1101 00:09:44.022178   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:44.022191   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:44.022201   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:44.022206   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:44.024358   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:44.024374   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:44.024380   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:44.024385   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:44.024390   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:44.024395   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:43 GMT
	I1101 00:09:44.024400   30593 round_trippers.go:580]     Audit-Id: 10d30ea6-f2a4-4468-b8d9-fe4d25cd5e9a
	I1101 00:09:44.024404   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:44.024539   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:44.518209   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
	I1101 00:09:44.518235   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:44.518243   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:44.518249   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:44.521184   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:44.521208   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:44.521218   30593 round_trippers.go:580]     Audit-Id: fc8c6383-2699-422a-8176-ddcab44a9a9c
	I1101 00:09:44.521238   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:44.521246   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:44.521255   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:44.521264   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:44.521273   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:44 GMT
	I1101 00:09:44.521459   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
	I1101 00:09:44.521894   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:44.521907   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:44.521914   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:44.521920   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:44.524063   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:44.524079   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:44.524085   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:44 GMT
	I1101 00:09:44.524135   30593 round_trippers.go:580]     Audit-Id: e14e26a5-28ca-4d3f-bae4-eea46c9e3a5b
	I1101 00:09:44.524159   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:44.524167   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:44.524177   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:44.524182   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:44.524354   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:44.524642   30593 pod_ready.go:102] pod "kube-apiserver-multinode-391061" in "kube-system" namespace has status "Ready":"False"
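
The pod_ready.go lines above are minikube polling the kube-apiserver mirror pod roughly every 500ms until its Ready condition flips to True. A minimal client-go sketch of that polling pattern (not minikube's actual implementation; the clientset wiring is assumed):

package podready

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady re-fetches the pod on a fixed interval, as in the log above,
// until its Ready condition reports True or the timeout expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err // surface API errors; the poller stops on them
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // no Ready condition published yet: keep polling
		})
}
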
	I1101 00:09:45.017778   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
	I1101 00:09:45.017807   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:45.017815   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:45.017822   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:45.021073   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:45.021103   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:45.021114   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:45.021124   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:45.021133   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:45.021142   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:45.021151   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:44 GMT
	I1101 00:09:45.021160   30593 round_trippers.go:580]     Audit-Id: 0dd2be34-8929-487b-8348-a144ffa6b941
	I1101 00:09:45.021400   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
	I1101 00:09:45.021872   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:45.021889   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:45.021897   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:45.021908   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:45.024844   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:45.024865   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:45.024874   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:45.024882   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:45.024889   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:44 GMT
	I1101 00:09:45.024897   30593 round_trippers.go:580]     Audit-Id: db32154e-ea80-4382-b7a1-53821506f75f
	I1101 00:09:45.024905   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:45.024912   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:45.025668   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:45.518404   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
	I1101 00:09:45.518429   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:45.518437   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:45.518442   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:45.521045   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:45.521065   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:45.521072   30593 round_trippers.go:580]     Audit-Id: 32e5cb3c-6d81-4568-831d-7a0dc39dbca2
	I1101 00:09:45.521077   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:45.521088   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:45.521093   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:45.521098   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:45.521103   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:45 GMT
	I1101 00:09:45.521484   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1242","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7607 chars]
	I1101 00:09:45.521900   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:45.521917   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:45.521924   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:45.521929   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:45.524067   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:45.524082   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:45.524088   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:45.524096   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:45 GMT
	I1101 00:09:45.524104   30593 round_trippers.go:580]     Audit-Id: 31736dc5-73c3-44fb-9ab2-5a9f73f0e730
	I1101 00:09:45.524113   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:45.524121   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:45.524130   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:45.524429   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:45.524707   30593 pod_ready.go:92] pod "kube-apiserver-multinode-391061" in "kube-system" namespace has status "Ready":"True"
	I1101 00:09:45.524722   30593 pod_ready.go:81] duration metric: took 3.267408141s waiting for pod "kube-apiserver-multinode-391061" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:45.524730   30593 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-391061" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:45.524780   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-391061
	I1101 00:09:45.524789   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:45.524796   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:45.524801   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:45.526609   30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:09:45.526623   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:45.526629   30593 round_trippers.go:580]     Audit-Id: c91e4f63-f1b9-4d99-b2a0-1ae44d4e3920
	I1101 00:09:45.526634   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:45.526639   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:45.526644   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:45.526649   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:45.526654   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:45 GMT
	I1101 00:09:45.526976   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-391061","namespace":"kube-system","uid":"4775e566-6acd-43ac-b7cd-8dbd245c33cf","resourceVersion":"1240","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"129a8ea77cdb10a9dd895cecf9b472c5","kubernetes.io/config.mirror":"129a8ea77cdb10a9dd895cecf9b472c5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059092388Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
	I1101 00:09:45.527354   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:45.527366   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:45.527373   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:45.527379   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:45.529038   30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:09:45.529053   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:45.529064   30593 round_trippers.go:580]     Audit-Id: 6d668043-98c8-4c98-9b23-07c7419995e3
	I1101 00:09:45.529069   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:45.529074   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:45.529079   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:45.529084   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:45.529089   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:45 GMT
	I1101 00:09:45.529310   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:45.529599   30593 pod_ready.go:92] pod "kube-controller-manager-multinode-391061" in "kube-system" namespace has status "Ready":"True"
	I1101 00:09:45.529612   30593 pod_ready.go:81] duration metric: took 4.877104ms waiting for pod "kube-controller-manager-multinode-391061" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:45.529629   30593 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-clsrp" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:45.529698   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-clsrp
	I1101 00:09:45.529709   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:45.529717   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:45.529727   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:45.531667   30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:09:45.531685   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:45.531694   30593 round_trippers.go:580]     Audit-Id: 179e6548-b6dd-4972-8941-597dc0f20790
	I1101 00:09:45.531703   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:45.531718   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:45.531724   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:45.531731   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:45.531737   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:45 GMT
	I1101 00:09:45.532195   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-clsrp","generateName":"kube-proxy-","namespace":"kube-system","uid":"a747b091-d679-4ae6-a995-c980235c9a61","resourceVersion":"1203","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4c7befcb-672f-40b1-8090-82e04bd6e1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c7befcb-672f-40b1-8090-82e04bd6e1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5738 chars]
	I1101 00:09:45.713849   30593 request.go:629] Waited for 181.057235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:45.713909   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:45.713914   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:45.713921   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:45.713927   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:45.716619   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:45.716637   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:45.716643   30593 round_trippers.go:580]     Audit-Id: 426c242f-3496-4e53-8631-c1189b21932f
	I1101 00:09:45.716649   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:45.716657   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:45.716665   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:45.716677   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:45.716689   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:45 GMT
	I1101 00:09:45.716889   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:45.717308   30593 pod_ready.go:92] pod "kube-proxy-clsrp" in "kube-system" namespace has status "Ready":"True"
	I1101 00:09:45.717325   30593 pod_ready.go:81] duration metric: took 187.686843ms waiting for pod "kube-proxy-clsrp" in "kube-system" namespace to be "Ready" ...
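
The "Waited for ... due to client-side throttling, not priority and fairness" messages in this stretch of the log come from client-go's client-side rate limiter, not from API Priority and Fairness on the server. That limiter is a token bucket configured on rest.Config; a sketch of where it lives, with illustrative values rather than minikube's actual settings:

package client

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newThrottledClient builds a clientset whose requests are paced by the
// client-side token bucket that produces the "Waited for ..." log lines.
func newThrottledClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 5    // sustained requests per second (illustrative, not minikube's value)
	cfg.Burst = 10 // short burst allowance before requests get delayed (illustrative)
	return kubernetes.NewForConfig(cfg)
}
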
	I1101 00:09:45.717337   30593 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rcnv9" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:45.914796   30593 request.go:629] Waited for 197.399239ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rcnv9
	I1101 00:09:45.914852   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rcnv9
	I1101 00:09:45.914857   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:45.914864   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:45.914871   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:45.917416   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:45.917445   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:45.917454   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:45 GMT
	I1101 00:09:45.917462   30593 round_trippers.go:580]     Audit-Id: 9cba40f3-3ad3-42a3-b93f-aa9cc6fc7dd3
	I1101 00:09:45.917475   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:45.917480   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:45.917486   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:45.917492   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:45.917704   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rcnv9","generateName":"kube-proxy-","namespace":"kube-system","uid":"9b65a6f4-4c34-40e5-a5bd-aedfc335cbc9","resourceVersion":"983","creationTimestamp":"2023-11-01T00:03:22Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4c7befcb-672f-40b1-8090-82e04bd6e1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:03:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c7befcb-672f-40b1-8090-82e04bd6e1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5749 chars]
	I1101 00:09:46.114598   30593 request.go:629] Waited for 196.375687ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m02
	I1101 00:09:46.114664   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m02
	I1101 00:09:46.114691   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:46.114704   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:46.114710   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:46.117340   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:46.117362   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:46.117371   30593 round_trippers.go:580]     Audit-Id: fc111c34-c570-4e3f-9832-d982a0432bc7
	I1101 00:09:46.117379   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:46.117388   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:46.117396   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:46.117408   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:46.117421   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:46 GMT
	I1101 00:09:46.117518   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061-m02","uid":"75fe164a-6fd6-4525-bacf-d792a509255b","resourceVersion":"999","creationTimestamp":"2023-11-01T00:07:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3253 chars]
	I1101 00:09:46.117775   30593 pod_ready.go:92] pod "kube-proxy-rcnv9" in "kube-system" namespace has status "Ready":"True"
	I1101 00:09:46.117792   30593 pod_ready.go:81] duration metric: took 400.44672ms waiting for pod "kube-proxy-rcnv9" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:46.117804   30593 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vdjh2" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:46.314248   30593 request.go:629] Waited for 196.387545ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vdjh2
	I1101 00:09:46.314341   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vdjh2
	I1101 00:09:46.314358   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:46.314369   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:46.314378   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:46.317400   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:46.317420   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:46.317429   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:46.317437   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:46 GMT
	I1101 00:09:46.317445   30593 round_trippers.go:580]     Audit-Id: feb64aac-545a-4487-be55-41e7c0e9ef0c
	I1101 00:09:46.317454   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:46.317463   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:46.317473   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:46.317739   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vdjh2","generateName":"kube-proxy-","namespace":"kube-system","uid":"9838a111-09e4-4975-b925-1ae5dcfa7334","resourceVersion":"1096","creationTimestamp":"2023-11-01T00:04:18Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4c7befcb-672f-40b1-8090-82e04bd6e1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:04:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c7befcb-672f-40b1-8090-82e04bd6e1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I1101 00:09:46.514556   30593 request.go:629] Waited for 196.355467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m03
	I1101 00:09:46.514623   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m03
	I1101 00:09:46.514630   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:46.514642   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:46.514652   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:46.517667   30593 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1101 00:09:46.517686   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:46.517695   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:46 GMT
	I1101 00:09:46.517703   30593 round_trippers.go:580]     Audit-Id: dee8bed2-39ff-4ddf-9b35-2afcacefb08c
	I1101 00:09:46.517710   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:46.517717   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:46.517725   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:46.517732   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:46.517743   30593 round_trippers.go:580]     Content-Length: 210
	I1101 00:09:46.517769   30593 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-391061-m03\" not found","reason":"NotFound","details":{"name":"multinode-391061-m03","kind":"nodes"},"code":404}
	I1101 00:09:46.517879   30593 pod_ready.go:97] node "multinode-391061-m03" hosting pod "kube-proxy-vdjh2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-391061-m03": nodes "multinode-391061-m03" not found
	I1101 00:09:46.517896   30593 pod_ready.go:81] duration metric: took 400.083902ms waiting for pod "kube-proxy-vdjh2" in "kube-system" namespace to be "Ready" ...
	E1101 00:09:46.517909   30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061-m03" hosting pod "kube-proxy-vdjh2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-391061-m03": nodes "multinode-391061-m03" not found
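
The 404 above arrives as a typed Status object ("reason":"NotFound"), which client-go surfaces as a NotFound error; the wait loop treats it as "host node gone" and skips the pod rather than failing. A sketch of the usual branch, using only standard apimachinery/client-go calls (not minikube's code):

package nodecheck

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeExists distinguishes "node genuinely absent" (the 404 in the log)
// from transport or authorization errors when fetching a node.
func nodeExists(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	_, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		// The apiserver answered with a Status object, reason NotFound, code 404.
		return false, nil
	}
	if err != nil {
		return false, fmt.Errorf("getting node %s: %w", name, err)
	}
	return true, nil
}
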
	I1101 00:09:46.517918   30593 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-391061" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:46.714359   30593 request.go:629] Waited for 196.368032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
	I1101 00:09:46.714428   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
	I1101 00:09:46.714439   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:46.714450   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:46.714460   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:46.717601   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:46.717622   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:46.717631   30593 round_trippers.go:580]     Audit-Id: b10ec514-fb68-4eb7-a82b-478bb7b2615a
	I1101 00:09:46.717638   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:46.717646   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:46.717653   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:46.717660   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:46.717669   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:46 GMT
	I1101 00:09:46.718240   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-391061","namespace":"kube-system","uid":"eaf767ff-8f68-4b91-bcd7-b550481a6155","resourceVersion":"1187","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.mirror":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.seen":"2023-11-01T00:02:21.059093363Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I1101 00:09:46.913939   30593 request.go:629] Waited for 195.310235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:46.913993   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:46.913998   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:46.914005   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:46.914018   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:46.916550   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:46.916574   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:46.916590   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:46.916598   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:46.916605   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:46.916613   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:46 GMT
	I1101 00:09:46.916622   30593 round_trippers.go:580]     Audit-Id: 3fdb3127-adb6-4b1b-973b-56d6f01c7510
	I1101 00:09:46.916635   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:46.916797   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:47.114664   30593 request.go:629] Waited for 197.399091ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
	I1101 00:09:47.114755   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
	I1101 00:09:47.114767   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:47.114785   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:47.114799   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:47.117780   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:47.117799   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:47.117806   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:47.117812   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:47.117817   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:47.117822   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:47 GMT
	I1101 00:09:47.117827   30593 round_trippers.go:580]     Audit-Id: 88a0065a-7184-46f2-bd0b-8a0b89e70b44
	I1101 00:09:47.117841   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:47.118061   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-391061","namespace":"kube-system","uid":"eaf767ff-8f68-4b91-bcd7-b550481a6155","resourceVersion":"1187","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.mirror":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.seen":"2023-11-01T00:02:21.059093363Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I1101 00:09:47.313739   30593 request.go:629] Waited for 195.316992ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:47.313819   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:47.313832   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:47.313850   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:47.313863   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:47.317452   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:47.317480   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:47.317490   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:47.317498   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:47.317506   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:47.317514   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:47.317522   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:47 GMT
	I1101 00:09:47.317530   30593 round_trippers.go:580]     Audit-Id: 2e316d17-f6a0-43df-b21e-ef5ee4396440
	I1101 00:09:47.317759   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:47.818890   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
	I1101 00:09:47.818917   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:47.818925   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:47.818932   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:47.821524   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:47.821546   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:47.821558   30593 round_trippers.go:580]     Audit-Id: 50ab8a02-fab8-41d2-abe4-e6fa324b51f1
	I1101 00:09:47.821566   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:47.821574   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:47.821582   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:47.821590   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:47.821600   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:47 GMT
	I1101 00:09:47.822014   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-391061","namespace":"kube-system","uid":"eaf767ff-8f68-4b91-bcd7-b550481a6155","resourceVersion":"1244","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.mirror":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.seen":"2023-11-01T00:02:21.059093363Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I1101 00:09:47.822399   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:47.822414   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:47.822432   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:47.822440   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:47.825524   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:47.825549   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:47.825559   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:47.825568   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:47 GMT
	I1101 00:09:47.825576   30593 round_trippers.go:580]     Audit-Id: cff53b13-6010-47a4-94a7-bfaa8a544728
	I1101 00:09:47.825584   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:47.825592   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:47.825600   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:47.825781   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:47.826104   30593 pod_ready.go:92] pod "kube-scheduler-multinode-391061" in "kube-system" namespace has status "Ready":"True"
	I1101 00:09:47.826120   30593 pod_ready.go:81] duration metric: took 1.308189456s waiting for pod "kube-scheduler-multinode-391061" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:47.826129   30593 pod_ready.go:38] duration metric: took 9.10408386s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:09:47.826150   30593 api_server.go:52] waiting for apiserver process to appear ...
	I1101 00:09:47.826195   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:09:47.838151   30593 command_runner.go:130] > 1704
	I1101 00:09:47.838274   30593 api_server.go:72] duration metric: took 11.499995093s to wait for apiserver process to appear ...
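
The `pgrep -xnf 'kube-apiserver.*minikube.*'` probe above does the heavy lifting in this step: -f matches against the full command line, -x requires the whole line to match the pattern, and -n returns only the newest matching PID. A minimal Go sketch of the same check (it assumes passwordless sudo on the guest; the pattern is copied verbatim from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// pgrep prints the newest PID whose full command line matches the
		// pattern, and exits non-zero when nothing matches yet.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			fmt.Println("apiserver process not up yet:", err)
			return
		}
		fmt.Println("apiserver pid:", strings.TrimSpace(string(out))) // "1704" in the log above
	}
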
	I1101 00:09:47.838293   30593 api_server.go:88] waiting for apiserver healthz status ...
	I1101 00:09:47.838314   30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I1101 00:09:47.844117   30593 api_server.go:279] https://192.168.39.43:8443/healthz returned 200:
	ok
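
Once the process exists, health is judged by /healthz returning 200 with the literal body "ok", exactly as logged above. A sketch of that probe, assuming `client` is an *http.Client that already carries the cluster CA and client certificates (building that TLS config is omitted here):

	package main

	import (
		"io"
		"net/http"
	)

	// healthzOK reports whether the apiserver answers /healthz with 200 "ok".
	func healthzOK(client *http.Client, host string) (bool, error) {
		resp, err := client.Get("https://" + host + "/healthz")
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return false, err
		}
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}
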
	I1101 00:09:47.844194   30593 round_trippers.go:463] GET https://192.168.39.43:8443/version
	I1101 00:09:47.844207   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:47.844218   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:47.844226   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:47.845412   30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:09:47.845425   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:47.845431   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:47.845436   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:47.845442   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:47.845450   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:47.845463   30593 round_trippers.go:580]     Content-Length: 264
	I1101 00:09:47.845475   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:47 GMT
	I1101 00:09:47.845485   30593 round_trippers.go:580]     Audit-Id: 1468702f-2934-4914-b020-c0a4990038b1
	I1101 00:09:47.845504   30593 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
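
The /version payload above is small enough to decode directly, and the next log line shows minikube picking gitVersion out of it as the control-plane version. A self-contained sketch using the exact body from the log (the struct mirrors a subset of the response fields):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// versionInfo holds the fields of the /version response used here.
	type versionInfo struct {
		Major      string `json:"major"`
		Minor      string `json:"minor"`
		GitVersion string `json:"gitVersion"`
		GoVersion  string `json:"goVersion"`
		Platform   string `json:"platform"`
	}

	func main() {
		raw := []byte(`{"major":"1","minor":"28","gitVersion":"v1.28.3","goVersion":"go1.20.10","platform":"linux/amd64"}`)
		var v versionInfo
		if err := json.Unmarshal(raw, &v); err != nil {
			panic(err)
		}
		fmt.Println("control plane version:", v.GitVersion) // v1.28.3, matching the log
	}
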
	I1101 00:09:47.845540   30593 api_server.go:141] control plane version: v1.28.3
	I1101 00:09:47.845552   30593 api_server.go:131] duration metric: took 7.252944ms to wait for apiserver health ...
	I1101 00:09:47.845562   30593 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 00:09:47.913821   30593 request.go:629] Waited for 68.174041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I1101 00:09:47.913881   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I1101 00:09:47.913885   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:47.913893   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:47.913899   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:47.918202   30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:09:47.918230   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:47.918239   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:47.918248   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:47.918254   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:47 GMT
	I1101 00:09:47.918259   30593 round_trippers.go:580]     Audit-Id: b30ccebe-8256-4a7d-a462-7b4e1d0cdfa8
	I1101 00:09:47.918264   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:47.918269   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:47.920031   30593 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1244"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1232","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83346 chars]
	I1101 00:09:47.922413   30593 system_pods.go:59] 12 kube-system pods found
	I1101 00:09:47.922434   30593 system_pods.go:61] "coredns-5dd5756b68-dg5w7" [eb94555e-1465-4dec-9d6d-ebcbec02841e] Running
	I1101 00:09:47.922438   30593 system_pods.go:61] "etcd-multinode-391061" [0537cc4c-2127-4424-b02f-9e4747bc8713] Running
	I1101 00:09:47.922442   30593 system_pods.go:61] "kindnet-4jfj9" [2559e20b-85cf-43d5-8663-7ec855d71df9] Running
	I1101 00:09:47.922446   30593 system_pods.go:61] "kindnet-lcljq" [171d5f22-d781-4224-88f7-f940ad9e747b] Running
	I1101 00:09:47.922450   30593 system_pods.go:61] "kindnet-wrdhd" [85db010e-82bd-4efa-a760-0669bf1e52de] Running
	I1101 00:09:47.922454   30593 system_pods.go:61] "kube-apiserver-multinode-391061" [dff82899-3db2-46a2-aea0-ec57d58be1c8] Running
	I1101 00:09:47.922458   30593 system_pods.go:61] "kube-controller-manager-multinode-391061" [4775e566-6acd-43ac-b7cd-8dbd245c33cf] Running
	I1101 00:09:47.922462   30593 system_pods.go:61] "kube-proxy-clsrp" [a747b091-d679-4ae6-a995-c980235c9a61] Running
	I1101 00:09:47.922465   30593 system_pods.go:61] "kube-proxy-rcnv9" [9b65a6f4-4c34-40e5-a5bd-aedfc335cbc9] Running
	I1101 00:09:47.922476   30593 system_pods.go:61] "kube-proxy-vdjh2" [9838a111-09e4-4975-b925-1ae5dcfa7334] Running
	I1101 00:09:47.922481   30593 system_pods.go:61] "kube-scheduler-multinode-391061" [eaf767ff-8f68-4b91-bcd7-b550481a6155] Running
	I1101 00:09:47.922485   30593 system_pods.go:61] "storage-provisioner" [b0b970e9-7d0b-4e94-8ca8-2f3348eaf579] Running
	I1101 00:09:47.922492   30593 system_pods.go:74] duration metric: took 76.924582ms to wait for pod list to return data ...
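
The "Waited for ... due to client-side throttling, not priority and fairness" lines are client-go's own token-bucket limiter, which defaults to 5 requests per second with a burst of 10; the rapid-fire list calls in this phase briefly exceed that. Where the delay matters, the limits can be raised when the client is built. A sketch (the kubeconfig path is a placeholder):

	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// newFastClient builds a clientset with a higher client-side rate limit
	// than the QPS=5/Burst=10 defaults responsible for the waits above.
	func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50
		cfg.Burst = 100
		return kubernetes.NewForConfig(cfg)
	}
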
	I1101 00:09:47.922513   30593 default_sa.go:34] waiting for default service account to be created ...
	I1101 00:09:48.113860   30593 request.go:629] Waited for 191.269729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/default/serviceaccounts
	I1101 00:09:48.113931   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/default/serviceaccounts
	I1101 00:09:48.113936   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:48.113943   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:48.113949   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:48.117152   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:48.117173   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:48.117179   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:48.117184   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:48.117189   30593 round_trippers.go:580]     Content-Length: 262
	I1101 00:09:48.117194   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:48 GMT
	I1101 00:09:48.117199   30593 round_trippers.go:580]     Audit-Id: cf19f0f1-599a-4c01-a817-75c7ba89021a
	I1101 00:09:48.117204   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:48.117209   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:48.117226   30593 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1244"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"331ecfcc-8852-4250-85c2-da77e5b314fe","resourceVersion":"364","creationTimestamp":"2023-11-01T00:02:33Z"}}]}
	I1101 00:09:48.117391   30593 default_sa.go:45] found service account: "default"
	I1101 00:09:48.117408   30593 default_sa.go:55] duration metric: took 194.889894ms for default service account to be created ...
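
Every "waiting for ..." phase in this stretch has the same shape: poll a lookup until it reports success or a deadline passes, then record a duration metric. A generic standalone sketch of that loop; the `found` callback is a stand-in for, e.g., the service-account GET above:

	package main

	import (
		"fmt"
		"time"
	)

	// pollUntil calls found every interval until it returns true, an error,
	// or the timeout elapses.
	func pollUntil(interval, timeout time.Duration, found func() (bool, error)) error {
		deadline := time.Now().Add(timeout)
		for {
			ok, err := found()
			if err != nil {
				return err
			}
			if ok {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %s", timeout)
			}
			time.Sleep(interval)
		}
	}
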
	I1101 00:09:48.117415   30593 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 00:09:48.313818   30593 request.go:629] Waited for 196.325558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I1101 00:09:48.313881   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I1101 00:09:48.313886   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:48.313893   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:48.313899   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:48.317985   30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:09:48.318004   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:48.318011   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:48.318018   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:48 GMT
	I1101 00:09:48.318027   30593 round_trippers.go:580]     Audit-Id: 7b682312-a373-4aac-a928-19f0e9f08ce4
	I1101 00:09:48.318035   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:48.318042   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:48.318051   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:48.319258   30593 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1244"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1232","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83346 chars]
	I1101 00:09:48.321698   30593 system_pods.go:86] 12 kube-system pods found
	I1101 00:09:48.321724   30593 system_pods.go:89] "coredns-5dd5756b68-dg5w7" [eb94555e-1465-4dec-9d6d-ebcbec02841e] Running
	I1101 00:09:48.321729   30593 system_pods.go:89] "etcd-multinode-391061" [0537cc4c-2127-4424-b02f-9e4747bc8713] Running
	I1101 00:09:48.321733   30593 system_pods.go:89] "kindnet-4jfj9" [2559e20b-85cf-43d5-8663-7ec855d71df9] Running
	I1101 00:09:48.321739   30593 system_pods.go:89] "kindnet-lcljq" [171d5f22-d781-4224-88f7-f940ad9e747b] Running
	I1101 00:09:48.321743   30593 system_pods.go:89] "kindnet-wrdhd" [85db010e-82bd-4efa-a760-0669bf1e52de] Running
	I1101 00:09:48.321747   30593 system_pods.go:89] "kube-apiserver-multinode-391061" [dff82899-3db2-46a2-aea0-ec57d58be1c8] Running
	I1101 00:09:48.321752   30593 system_pods.go:89] "kube-controller-manager-multinode-391061" [4775e566-6acd-43ac-b7cd-8dbd245c33cf] Running
	I1101 00:09:48.321756   30593 system_pods.go:89] "kube-proxy-clsrp" [a747b091-d679-4ae6-a995-c980235c9a61] Running
	I1101 00:09:48.321762   30593 system_pods.go:89] "kube-proxy-rcnv9" [9b65a6f4-4c34-40e5-a5bd-aedfc335cbc9] Running
	I1101 00:09:48.321765   30593 system_pods.go:89] "kube-proxy-vdjh2" [9838a111-09e4-4975-b925-1ae5dcfa7334] Running
	I1101 00:09:48.321772   30593 system_pods.go:89] "kube-scheduler-multinode-391061" [eaf767ff-8f68-4b91-bcd7-b550481a6155] Running
	I1101 00:09:48.321777   30593 system_pods.go:89] "storage-provisioner" [b0b970e9-7d0b-4e94-8ca8-2f3348eaf579] Running
	I1101 00:09:48.321785   30593 system_pods.go:126] duration metric: took 204.365858ms to wait for k8s-apps to be running ...
	I1101 00:09:48.321794   30593 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 00:09:48.321835   30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:09:48.334581   30593 system_svc.go:56] duration metric: took 12.775415ms WaitForService to wait for kubelet.
	I1101 00:09:48.334608   30593 kubeadm.go:581] duration metric: took 11.996332779s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 00:09:48.334634   30593 node_conditions.go:102] verifying NodePressure condition ...
	I1101 00:09:48.514065   30593 request.go:629] Waited for 179.367734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes
	I1101 00:09:48.514131   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes
	I1101 00:09:48.514136   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:48.514144   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:48.514150   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:48.517017   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:48.517036   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:48.517043   30593 round_trippers.go:580]     Audit-Id: acbda546-1395-4e94-a808-39a73ef2e8e6
	I1101 00:09:48.517057   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:48.517063   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:48.517070   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:48.517077   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:48.517087   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:48 GMT
	I1101 00:09:48.517358   30593 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1244"},"items":[{"metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"ma
nagedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v [truncated 9463 chars]
	I1101 00:09:48.517853   30593 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:09:48.517873   30593 node_conditions.go:123] node cpu capacity is 2
	I1101 00:09:48.517883   30593 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:09:48.517888   30593 node_conditions.go:123] node cpu capacity is 2
	I1101 00:09:48.517892   30593 node_conditions.go:105] duration metric: took 183.255117ms to run NodePressure ...
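
The NodePressure check reads each node's capacity out of a single NodeList response, which is why the cpu and ephemeral-storage lines repeat once per node (two nodes are registered at this point). A client-go sketch of the same readout, assuming a configured clientset:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// printNodeCapacity prints the per-node figures the log reports above.
	func printNodeCapacity(ctx context.Context, clientset kubernetes.Interface) error {
		nodes, err := clientset.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		}
		return nil
	}
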
	I1101 00:09:48.517902   30593 start.go:228] waiting for startup goroutines ...
	I1101 00:09:48.517913   30593 start.go:233] waiting for cluster config update ...
	I1101 00:09:48.517918   30593 start.go:242] writing updated cluster config ...
	I1101 00:09:48.518328   30593 config.go:182] Loaded profile config "multinode-391061": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1101 00:09:48.518400   30593 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/config.json ...
	I1101 00:09:48.521532   30593 out.go:177] * Starting worker node multinode-391061-m02 in cluster multinode-391061
	I1101 00:09:48.522898   30593 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1101 00:09:48.522933   30593 cache.go:56] Caching tarball of preloaded images
	I1101 00:09:48.523028   30593 preload.go:174] Found /home/jenkins/minikube-integration/17486-7251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1101 00:09:48.523039   30593 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1101 00:09:48.523130   30593 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/config.json ...
	I1101 00:09:48.523306   30593 start.go:365] acquiring machines lock for multinode-391061-m02: {Name:mkd250049361a5d831a3d31c273569334737e54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:09:48.523347   30593 start.go:369] acquired machines lock for "multinode-391061-m02" in 23.277µs
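
The machines lock is taken with a 500ms retry delay and a 13m timeout (the Delay and Timeout fields above) so that concurrent minikube processes cannot provision the same VM twice; here it succeeds in 23.277µs because nothing contends for it. minikube uses a named cross-process lock for this, so the in-process sketch below only illustrates the acquire-with-timeout semantics, not the real mechanism:

	package main

	import "time"

	// timedLock is a mutex whose acquisition can give up after a timeout.
	type timedLock struct{ ch chan struct{} }

	func newTimedLock() *timedLock { return &timedLock{ch: make(chan struct{}, 1)} }

	// tryLock reports whether the lock was acquired before timeout elapsed.
	func (l *timedLock) tryLock(timeout time.Duration) bool {
		select {
		case l.ch <- struct{}{}: // slot free: lock acquired
			return true
		case <-time.After(timeout):
			return false
		}
	}

	func (l *timedLock) unlock() { <-l.ch }
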
	I1101 00:09:48.523360   30593 start.go:96] Skipping create...Using existing machine configuration
	I1101 00:09:48.523365   30593 fix.go:54] fixHost starting: m02
	I1101 00:09:48.523626   30593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:09:48.523657   30593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:09:48.538023   30593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33341
	I1101 00:09:48.538553   30593 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:09:48.539008   30593 main.go:141] libmachine: Using API Version  1
	I1101 00:09:48.539038   30593 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:09:48.539380   30593 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:09:48.539558   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
	I1101 00:09:48.539763   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetState
	I1101 00:09:48.541362   30593 fix.go:102] recreateIfNeeded on multinode-391061-m02: state=Stopped err=<nil>
	I1101 00:09:48.541381   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
	W1101 00:09:48.541559   30593 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 00:09:48.543776   30593 out.go:177] * Restarting existing kvm2 VM for "multinode-391061-m02" ...
	I1101 00:09:48.545357   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .Start
	I1101 00:09:48.545519   30593 main.go:141] libmachine: (multinode-391061-m02) Ensuring networks are active...
	I1101 00:09:48.546142   30593 main.go:141] libmachine: (multinode-391061-m02) Ensuring network default is active
	I1101 00:09:48.546521   30593 main.go:141] libmachine: (multinode-391061-m02) Ensuring network mk-multinode-391061 is active
	I1101 00:09:48.546910   30593 main.go:141] libmachine: (multinode-391061-m02) Getting domain xml...
	I1101 00:09:48.547503   30593 main.go:141] libmachine: (multinode-391061-m02) Creating domain...
	I1101 00:09:49.771823   30593 main.go:141] libmachine: (multinode-391061-m02) Waiting to get IP...
	I1101 00:09:49.772640   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:09:49.773071   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
	I1101 00:09:49.773175   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:49.773074   30847 retry.go:31] will retry after 274.263244ms: waiting for machine to come up
	I1101 00:09:50.048692   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:09:50.049124   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
	I1101 00:09:50.049162   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:50.049076   30847 retry.go:31] will retry after 372.692246ms: waiting for machine to come up
	I1101 00:09:50.423723   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:09:50.424163   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
	I1101 00:09:50.424198   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:50.424109   30847 retry.go:31] will retry after 328.806363ms: waiting for machine to come up
	I1101 00:09:50.754813   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:09:50.755280   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
	I1101 00:09:50.755299   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:50.755254   30847 retry.go:31] will retry after 486.547371ms: waiting for machine to come up
	I1101 00:09:51.243022   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:09:51.243428   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
	I1101 00:09:51.243451   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:51.243379   30847 retry.go:31] will retry after 524.248371ms: waiting for machine to come up
	I1101 00:09:51.769198   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:09:51.769648   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
	I1101 00:09:51.769689   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:51.769606   30847 retry.go:31] will retry after 931.47967ms: waiting for machine to come up
	I1101 00:09:52.703177   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:09:52.703627   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
	I1101 00:09:52.703656   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:52.703550   30847 retry.go:31] will retry after 962.96473ms: waiting for machine to come up
	I1101 00:09:53.668096   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:09:53.668562   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
	I1101 00:09:53.668584   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:53.668516   30847 retry.go:31] will retry after 926.464487ms: waiting for machine to come up
	I1101 00:09:54.596589   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:09:54.596929   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
	I1101 00:09:54.596953   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:54.596883   30847 retry.go:31] will retry after 1.199020855s: waiting for machine to come up
	I1101 00:09:55.797189   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:09:55.797717   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
	I1101 00:09:55.797748   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:55.797665   30847 retry.go:31] will retry after 1.98043569s: waiting for machine to come up
	I1101 00:09:57.780876   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:09:57.781471   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
	I1101 00:09:57.781502   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:57.781409   30847 retry.go:31] will retry after 2.601288069s: waiting for machine to come up
	I1101 00:10:00.385745   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:00.386332   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
	I1101 00:10:00.386369   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:10:00.386242   30847 retry.go:31] will retry after 2.239008923s: waiting for machine to come up
	I1101 00:10:02.627577   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:02.627955   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
	I1101 00:10:02.627983   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:10:02.627920   30847 retry.go:31] will retry after 3.415765053s: waiting for machine to come up
	I1101 00:10:06.046739   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.047249   30593 main.go:141] libmachine: (multinode-391061-m02) Found IP for machine: 192.168.39.249
	I1101 00:10:06.047290   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has current primary IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.047305   30593 main.go:141] libmachine: (multinode-391061-m02) Reserving static IP address...
	I1101 00:10:06.047763   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "multinode-391061-m02", mac: "52:54:00:f1:1a:84", ip: "192.168.39.249"} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:06.047790   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | skip adding static IP to network mk-multinode-391061 - found existing host DHCP lease matching {name: "multinode-391061-m02", mac: "52:54:00:f1:1a:84", ip: "192.168.39.249"}
	I1101 00:10:06.047800   30593 main.go:141] libmachine: (multinode-391061-m02) Reserved static IP address: 192.168.39.249
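
The run of retry.go lines above is a growing, jittered backoff around "does the DHCP lease for this MAC show an IP yet": delays start around a quarter second and stretch toward several seconds until the lease appears. A standalone sketch of that loop shape; the lookup callback, the cap, and the deadline are assumptions rather than minikube's exact policy:

	package main

	import (
		"errors"
		"time"
	)

	// waitForIP polls lookup with growing delays until it yields an address
	// or the deadline passes.
	func waitForIP(lookup func() (string, error)) (string, error) {
		delay := 250 * time.Millisecond
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			if ip, err := lookup(); err == nil && ip != "" {
				return ip, nil
			}
			time.Sleep(delay)
			if delay < 4*time.Second {
				delay *= 2 // the logged delays grow in roughly this fashion
			}
		}
		return "", errors.New("timed out waiting for machine IP")
	}
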
	I1101 00:10:06.047814   30593 main.go:141] libmachine: (multinode-391061-m02) Waiting for SSH to be available...
	I1101 00:10:06.047824   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | Getting to WaitForSSH function...
	I1101 00:10:06.049673   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.050046   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:06.050081   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.050222   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | Using SSH client type: external
	I1101 00:10:06.050261   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa (-rw-------)
	I1101 00:10:06.050300   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 00:10:06.050322   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | About to run SSH command:
	I1101 00:10:06.050339   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | exit 0
	I1101 00:10:06.146337   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | SSH cmd err, output: <nil>: 
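
WaitForSSH shells out to the system ssh client with the options shown a few lines up and runs `exit 0`; a zero exit status (the `SSH cmd err, output: <nil>` line) means the daemon is accepting authenticated logins. A sketch of that probe with a trimmed option set; key path, user, and address are parameters rather than the values from this run:

	package main

	import "os/exec"

	// sshReady reports whether `exit 0` succeeds over ssh, i.e. whether the
	// guest's sshd is up and accepting the given key.
	func sshReady(keyPath, user, addr string) bool {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			user+"@"+addr,
			"exit 0")
		return cmd.Run() == nil
	}
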
	I1101 00:10:06.146696   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetConfigRaw
	I1101 00:10:06.147450   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetIP
	I1101 00:10:06.149870   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.150236   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:06.150267   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.150541   30593 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/config.json ...
	I1101 00:10:06.150763   30593 machine.go:88] provisioning docker machine ...
	I1101 00:10:06.150786   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
	I1101 00:10:06.150984   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetMachineName
	I1101 00:10:06.151140   30593 buildroot.go:166] provisioning hostname "multinode-391061-m02"
	I1101 00:10:06.151161   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetMachineName
	I1101 00:10:06.151315   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
	I1101 00:10:06.153372   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.153742   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:06.153790   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.153926   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
	I1101 00:10:06.154158   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:06.154347   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:06.154535   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
	I1101 00:10:06.154739   30593 main.go:141] libmachine: Using SSH client type: native
	I1101 00:10:06.155162   30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1101 00:10:06.155179   30593 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-391061-m02 && echo "multinode-391061-m02" | sudo tee /etc/hostname
	I1101 00:10:06.302682   30593 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-391061-m02
	
	I1101 00:10:06.302715   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
	I1101 00:10:06.305443   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.305857   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:06.305883   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.306094   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
	I1101 00:10:06.306306   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:06.306521   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:06.306659   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
	I1101 00:10:06.306805   30593 main.go:141] libmachine: Using SSH client type: native
	I1101 00:10:06.307269   30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1101 00:10:06.307298   30593 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-391061-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-391061-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-391061-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 00:10:06.448087   30593 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 00:10:06.448122   30593 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7251/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7251/.minikube}
	I1101 00:10:06.448143   30593 buildroot.go:174] setting up certificates
	I1101 00:10:06.448153   30593 provision.go:83] configureAuth start
	I1101 00:10:06.448163   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetMachineName
	I1101 00:10:06.448466   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetIP
	I1101 00:10:06.451196   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.451596   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:06.451627   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.451812   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
	I1101 00:10:06.453965   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.454286   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:06.454315   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.454535   30593 provision.go:138] copyHostCerts
	I1101 00:10:06.454570   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem
	I1101 00:10:06.454601   30593 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem, removing ...
	I1101 00:10:06.454610   30593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem
	I1101 00:10:06.454674   30593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem (1082 bytes)
	I1101 00:10:06.454748   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem
	I1101 00:10:06.454767   30593 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem, removing ...
	I1101 00:10:06.454773   30593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem
	I1101 00:10:06.454796   30593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem (1123 bytes)
	I1101 00:10:06.454836   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem
	I1101 00:10:06.454852   30593 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem, removing ...
	I1101 00:10:06.454858   30593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem
	I1101 00:10:06.454876   30593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem (1675 bytes)
	I1101 00:10:06.454920   30593 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem org=jenkins.multinode-391061-m02 san=[192.168.39.249 192.168.39.249 localhost 127.0.0.1 minikube multinode-391061-m02]
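
The server cert generated here carries the SAN list from the log line (the machine IP twice, localhost, 127.0.0.1, and the minikube and node hostnames) and is signed by the profile's CA. A minimal crypto/x509 sketch of issuing such a cert; loading the CA pair and persisting the PEM files are omitted, and the SAN values are copied from the log:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// issueServerCert creates a CA-signed server certificate whose SANs
	// match the san=[...] list logged above.
	func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) (der []byte, key *rsa.PrivateKey, err error) {
		key, err = rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-391061-m02"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("192.168.39.249"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "multinode-391061-m02"},
		}
		der, err = x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return der, key, nil
	}
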
	I1101 00:10:06.568585   30593 provision.go:172] copyRemoteCerts
	I1101 00:10:06.568638   30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 00:10:06.568659   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
	I1101 00:10:06.571150   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.571450   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:06.571479   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.571687   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
	I1101 00:10:06.571874   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:06.572047   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
	I1101 00:10:06.572186   30593 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa Username:docker}
	I1101 00:10:06.667838   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 00:10:06.667924   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 00:10:06.689930   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 00:10:06.689995   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1101 00:10:06.712213   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 00:10:06.712292   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 00:10:06.733879   30593 provision.go:86] duration metric: configureAuth took 285.714663ms
	I1101 00:10:06.733904   30593 buildroot.go:189] setting minikube options for container-runtime
	I1101 00:10:06.734094   30593 config.go:182] Loaded profile config "multinode-391061": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1101 00:10:06.734113   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
	I1101 00:10:06.734377   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
	I1101 00:10:06.736917   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.737314   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:06.737348   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.737503   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
	I1101 00:10:06.737692   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:06.737870   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:06.738014   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
	I1101 00:10:06.738189   30593 main.go:141] libmachine: Using SSH client type: native
	I1101 00:10:06.738528   30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1101 00:10:06.738541   30593 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1101 00:10:06.871826   30593 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1101 00:10:06.871854   30593 buildroot.go:70] root file system type: tmpfs
	I1101 00:10:06.872006   30593 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1101 00:10:06.872036   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
	I1101 00:10:06.874568   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.874916   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:06.874940   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.875118   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
	I1101 00:10:06.875315   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:06.875468   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:06.875569   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
	I1101 00:10:06.875698   30593 main.go:141] libmachine: Using SSH client type: native
	I1101 00:10:06.876002   30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1101 00:10:06.876075   30593 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.43"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1101 00:10:07.020165   30593 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.43
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1101 00:10:07.020194   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
	I1101 00:10:07.022769   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:07.023132   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:07.023159   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:07.023341   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
	I1101 00:10:07.023522   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:07.023707   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:07.023843   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
	I1101 00:10:07.023996   30593 main.go:141] libmachine: Using SSH client type: native
	I1101 00:10:07.024324   30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1101 00:10:07.024341   30593 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1101 00:10:07.865650   30593 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1101 00:10:07.865678   30593 machine.go:91] provisioned docker machine in 1.714900545s
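
Two details of the docker.service write above are worth noting. First, the drop-in clears the inherited command with a bare `ExecStart=` line before setting its own, because systemd rejects two ExecStart= values for anything but Type=oneshot units (the comment block inside the unit says as much). Second, the `diff ... || { mv ...; daemon-reload && enable && restart; }` one-liner makes the install idempotent: an unchanged unit never bounces Docker. In this run the live file does not exist yet, so the diff fails and the install branch runs, which is why the "Created symlink" message appears. A sketch of that idempotent step with the remote command runner abstracted away (the `run` signature is an assumption):

	package main

	import "fmt"

	// installUnit replaces the live unit and restarts docker only when the
	// freshly rendered file differs from what is installed.
	func installUnit(run func(cmd string) error, newPath, livePath string) error {
		if run(fmt.Sprintf("sudo diff -u %s %s", livePath, newPath)) == nil {
			return nil // identical content: leave the running daemon alone
		}
		return run(fmt.Sprintf(
			"sudo mv %s %s && sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker",
			newPath, livePath))
	}
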
	I1101 00:10:07.865693   30593 start.go:300] post-start starting for "multinode-391061-m02" (driver="kvm2")
	I1101 00:10:07.865707   30593 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 00:10:07.865730   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
	I1101 00:10:07.866051   30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 00:10:07.866082   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
	I1101 00:10:07.868728   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:07.869111   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:07.869135   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:07.869295   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
	I1101 00:10:07.869516   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:07.869672   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
	I1101 00:10:07.869814   30593 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa Username:docker}
	I1101 00:10:07.964822   30593 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 00:10:07.968645   30593 command_runner.go:130] > NAME=Buildroot
	I1101 00:10:07.968665   30593 command_runner.go:130] > VERSION=2021.02.12-1-g0cee705-dirty
	I1101 00:10:07.968672   30593 command_runner.go:130] > ID=buildroot
	I1101 00:10:07.968681   30593 command_runner.go:130] > VERSION_ID=2021.02.12
	I1101 00:10:07.968687   30593 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1101 00:10:07.968778   30593 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 00:10:07.968802   30593 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7251/.minikube/addons for local assets ...
	I1101 00:10:07.968861   30593 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7251/.minikube/files for local assets ...
	I1101 00:10:07.968928   30593 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem -> 144632.pem in /etc/ssl/certs
	I1101 00:10:07.968937   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem -> /etc/ssl/certs/144632.pem
	I1101 00:10:07.969013   30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 00:10:07.978134   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem --> /etc/ssl/certs/144632.pem (1708 bytes)
	I1101 00:10:07.999912   30593 start.go:303] post-start completed in 134.20357ms
	I1101 00:10:07.999936   30593 fix.go:56] fixHost completed within 19.476570148s
	I1101 00:10:07.999956   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
	I1101 00:10:08.002715   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:08.003077   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:08.003109   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:08.003255   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
	I1101 00:10:08.003478   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:08.003658   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:08.003796   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
	I1101 00:10:08.003977   30593 main.go:141] libmachine: Using SSH client type: native
	I1101 00:10:08.004287   30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1101 00:10:08.004297   30593 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1101 00:10:08.139625   30593 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698797408.091239350
	
	I1101 00:10:08.139661   30593 fix.go:206] guest clock: 1698797408.091239350
	I1101 00:10:08.139672   30593 fix.go:219] Guest: 2023-11-01 00:10:08.09123935 +0000 UTC Remote: 2023-11-01 00:10:07.999939094 +0000 UTC m=+78.350442936 (delta=91.300256ms)
	I1101 00:10:08.139692   30593 fix.go:190] guest clock delta is within tolerance: 91.300256ms
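fix.go compares the guest clock against the host before reusing the machine: it runs `date +%s.%N` over SSH, parses the seconds.nanoseconds reply, and accepts the machine when the delta stays within tolerance (91ms here). A sketch of that parse-and-compare step; the 2-second tolerance below is an assumption, since the log only records that 91ms passed:

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output such as
// "1698797408.091239350" into a time.Time. %N always prints nine
// digits, so the fractional part reads directly as nanoseconds.
func parseGuestClock(s string) (time.Time, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(s), ".")
	secs, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsec, err := strconv.ParseInt(frac, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(secs, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1698797408.091239350")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	// Assumed tolerance: the log only states the 91ms delta was acceptable.
	const tolerance = 2 * time.Second
	fmt.Printf("delta=%v within tolerance: %v\n",
		delta, math.Abs(float64(delta)) <= float64(tolerance))
}
```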
	I1101 00:10:08.139699   30593 start.go:83] releasing machines lock for "multinode-391061-m02", held for 19.616342127s
	I1101 00:10:08.139723   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
	I1101 00:10:08.140075   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetIP
	I1101 00:10:08.142846   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:08.143203   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:08.143246   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:08.145734   30593 out.go:177] * Found network options:
	I1101 00:10:08.147426   30593 out.go:177]   - NO_PROXY=192.168.39.43
	W1101 00:10:08.148945   30593 proxy.go:119] fail to check proxy env: Error ip not in block
	I1101 00:10:08.148990   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
	I1101 00:10:08.149744   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
	I1101 00:10:08.149992   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
	I1101 00:10:08.150087   30593 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 00:10:08.150122   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
	W1101 00:10:08.150204   30593 proxy.go:119] fail to check proxy env: Error ip not in block
	I1101 00:10:08.150272   30593 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1101 00:10:08.150293   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
	I1101 00:10:08.153130   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:08.153377   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:08.153609   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:08.153633   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:08.153818   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:08.153840   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:08.153853   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
	I1101 00:10:08.154005   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
	I1101 00:10:08.154068   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:08.154141   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:08.154205   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
	I1101 00:10:08.154260   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
	I1101 00:10:08.154322   30593 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa Username:docker}
	I1101 00:10:08.154355   30593 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa Username:docker}
	I1101 00:10:08.266696   30593 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1101 00:10:08.266764   30593 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1101 00:10:08.266798   30593 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 00:10:08.266854   30593 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 00:10:08.282630   30593 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1101 00:10:08.282695   30593 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 00:10:08.282708   30593 start.go:472] detecting cgroup driver to use...
	I1101 00:10:08.282848   30593 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:10:08.299593   30593 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
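At this point crictl is pointed at containerd's socket; once Docker wins the cgroup-driver detection below, the same file is rewritten to `unix:///var/run/cri-dockerd.sock`. The file is a single key, so generating it is trivial; a sketch (writing a local file for illustration rather than going through the SSH runner):

```go
package main

import (
	"fmt"
	"os"
)

// writeCrictlConfig renders the one-line crictl.yaml visible in the log,
// parameterized on the runtime endpoint.
func writeCrictlConfig(path, endpoint string) error {
	cfg := fmt.Sprintf("runtime-endpoint: %s\n", endpoint)
	return os.WriteFile(path, []byte(cfg), 0644)
}

func main() {
	if err := writeCrictlConfig("crictl.yaml", "unix:///run/containerd/containerd.sock"); err != nil {
		fmt.Println("write failed:", err)
	}
}
```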
	I1101 00:10:08.299879   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1101 00:10:08.309962   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1101 00:10:08.319802   30593 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1101 00:10:08.319855   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1101 00:10:08.329984   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1101 00:10:08.340324   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1101 00:10:08.350388   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1101 00:10:08.360362   30593 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 00:10:08.370630   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1101 00:10:08.380841   30593 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 00:10:08.389848   30593 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1101 00:10:08.389933   30593 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 00:10:08.398827   30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:10:08.509909   30593 ssh_runner.go:195] Run: sudo systemctl restart containerd
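The run of sed commands between the crictl write and this restart reshapes /etc/containerd/config.toml: pin the pause image, relax `restrict_oom_score_adj`, migrate runtime names to `io.containerd.runc.v2`, reset `conf_dir`, and, for the driver choice, force `SystemdCgroup = false` (the cgroupfs driver). A sketch of that last substitution as an in-memory edit, mirroring the logged sed expression:

```go
package main

import (
	"fmt"
	"regexp"
)

// forceCgroupfs rewrites any `SystemdCgroup = ...` assignment to false,
// preserving the original indentation, mirroring the sed expression
// `s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g`.
func forceCgroupfs(config []byte) []byte {
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	return re.ReplaceAll(config, []byte("${1}SystemdCgroup = false"))
}

func main() {
	in := []byte("    SystemdCgroup = true\n")
	fmt.Print(string(forceCgroupfs(in))) // "    SystemdCgroup = false"
}
```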
	I1101 00:10:08.527202   30593 start.go:472] detecting cgroup driver to use...
	I1101 00:10:08.527267   30593 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1101 00:10:08.539911   30593 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1101 00:10:08.540831   30593 command_runner.go:130] > [Unit]
	I1101 00:10:08.540847   30593 command_runner.go:130] > Description=Docker Application Container Engine
	I1101 00:10:08.540853   30593 command_runner.go:130] > Documentation=https://docs.docker.com
	I1101 00:10:08.540859   30593 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1101 00:10:08.540864   30593 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1101 00:10:08.540873   30593 command_runner.go:130] > StartLimitBurst=3
	I1101 00:10:08.540880   30593 command_runner.go:130] > StartLimitIntervalSec=60
	I1101 00:10:08.540884   30593 command_runner.go:130] > [Service]
	I1101 00:10:08.540890   30593 command_runner.go:130] > Type=notify
	I1101 00:10:08.540899   30593 command_runner.go:130] > Restart=on-failure
	I1101 00:10:08.540906   30593 command_runner.go:130] > Environment=NO_PROXY=192.168.39.43
	I1101 00:10:08.540915   30593 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1101 00:10:08.540932   30593 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1101 00:10:08.540943   30593 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1101 00:10:08.540952   30593 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1101 00:10:08.540961   30593 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1101 00:10:08.540970   30593 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1101 00:10:08.540980   30593 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1101 00:10:08.540993   30593 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1101 00:10:08.541002   30593 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1101 00:10:08.541009   30593 command_runner.go:130] > ExecStart=
	I1101 00:10:08.541024   30593 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1101 00:10:08.541035   30593 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1101 00:10:08.541042   30593 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1101 00:10:08.541051   30593 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1101 00:10:08.541057   30593 command_runner.go:130] > LimitNOFILE=infinity
	I1101 00:10:08.541062   30593 command_runner.go:130] > LimitNPROC=infinity
	I1101 00:10:08.541066   30593 command_runner.go:130] > LimitCORE=infinity
	I1101 00:10:08.541073   30593 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1101 00:10:08.541080   30593 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1101 00:10:08.541087   30593 command_runner.go:130] > TasksMax=infinity
	I1101 00:10:08.541091   30593 command_runner.go:130] > TimeoutStartSec=0
	I1101 00:10:08.541100   30593 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1101 00:10:08.541106   30593 command_runner.go:130] > Delegate=yes
	I1101 00:10:08.541112   30593 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1101 00:10:08.541122   30593 command_runner.go:130] > KillMode=process
	I1101 00:10:08.541128   30593 command_runner.go:130] > [Install]
	I1101 00:10:08.541133   30593 command_runner.go:130] > WantedBy=multi-user.target
	I1101 00:10:08.541558   30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:10:08.556173   30593 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 00:10:08.575016   30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:10:08.587990   30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 00:10:08.601691   30593 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1101 00:10:08.631342   30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 00:10:08.644194   30593 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:10:08.661548   30593 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1101 00:10:08.662099   30593 ssh_runner.go:195] Run: which cri-dockerd
	I1101 00:10:08.665592   30593 command_runner.go:130] > /usr/bin/cri-dockerd
	I1101 00:10:08.665782   30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1101 00:10:08.674228   30593 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1101 00:10:08.690202   30593 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1101 00:10:08.793665   30593 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1101 00:10:08.913029   30593 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
	I1101 00:10:08.913074   30593 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
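Docker itself is switched to cgroupfs through the 130-byte /etc/docker/daemon.json pushed here. The payload is not echoed in the log, so the shape below is only a guess at a typical daemon.json for this setup; every field is an assumption:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// daemonConfig is a guess at the daemon.json shape; the log reports only
// that 130 bytes were copied, not the payload itself.
type daemonConfig struct {
	ExecOpts      []string          `json:"exec-opts"`
	LogDriver     string            `json:"log-driver"`
	LogOpts       map[string]string `json:"log-opts"`
	StorageDriver string            `json:"storage-driver"`
}

func main() {
	cfg := daemonConfig{
		ExecOpts:      []string{"native.cgroupdriver=cgroupfs"},
		LogDriver:     "json-file",
		LogOpts:       map[string]string{"max-size": "100m"},
		StorageDriver: "overlay2",
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}
```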
	I1101 00:10:08.928591   30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:10:09.029624   30593 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1101 00:10:10.439233   30593 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.409560046s)
	I1101 00:10:10.439309   30593 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1101 00:10:10.540266   30593 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1101 00:10:10.657292   30593 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1101 00:10:10.768655   30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:10:10.871570   30593 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1101 00:10:10.887421   30593 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
	I1101 00:10:10.889772   30593 out.go:177] 
	W1101 00:10:10.891480   30593 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W1101 00:10:10.891500   30593 out.go:239] * 
	W1101 00:10:10.892409   30593 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
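The failure message defers to `journalctl -xe` for the underlying reason the socket restart failed. When reproducing this on the guest, scoping the query to the failing unit is usually the faster route; a sketch:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// -x adds explanatory catalog text, -e jumps to the end of the journal,
	// and -u scopes the output to the unit whose restart failed.
	out, err := exec.Command("journalctl", "-xe", "-u", "cri-docker.socket").CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("journalctl:", err)
	}
}
```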
	I1101 00:10:10.894220   30593 out.go:177] 

** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-linux-amd64 start -p multinode-391061 --wait=true -v=8 --alsologtostderr --driver=kvm2 " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-391061 -n multinode-391061
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-391061 logs -n 25: (1.289304328s)
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| cp      | multinode-391061 cp multinode-391061-m02:/home/docker/cp-test.txt                       | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
	|         | multinode-391061:/home/docker/cp-test_multinode-391061-m02_multinode-391061.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-391061 ssh -n                                                                 | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
	|         | multinode-391061-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-391061 ssh -n multinode-391061 sudo cat                                       | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
	|         | /home/docker/cp-test_multinode-391061-m02_multinode-391061.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-391061 cp multinode-391061-m02:/home/docker/cp-test.txt                       | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
	|         | multinode-391061-m03:/home/docker/cp-test_multinode-391061-m02_multinode-391061-m03.txt |                  |         |                |                     |                     |
	| ssh     | multinode-391061 ssh -n                                                                 | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
	|         | multinode-391061-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-391061 ssh -n multinode-391061-m03 sudo cat                                   | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
	|         | /home/docker/cp-test_multinode-391061-m02_multinode-391061-m03.txt                      |                  |         |                |                     |                     |
	| cp      | multinode-391061 cp testdata/cp-test.txt                                                | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
	|         | multinode-391061-m03:/home/docker/cp-test.txt                                           |                  |         |                |                     |                     |
	| ssh     | multinode-391061 ssh -n                                                                 | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
	|         | multinode-391061-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-391061 cp multinode-391061-m03:/home/docker/cp-test.txt                       | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile415772365/001/cp-test_multinode-391061-m03.txt          |                  |         |                |                     |                     |
	| ssh     | multinode-391061 ssh -n                                                                 | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
	|         | multinode-391061-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-391061 cp multinode-391061-m03:/home/docker/cp-test.txt                       | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
	|         | multinode-391061:/home/docker/cp-test_multinode-391061-m03_multinode-391061.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-391061 ssh -n                                                                 | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
	|         | multinode-391061-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-391061 ssh -n multinode-391061 sudo cat                                       | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
	|         | /home/docker/cp-test_multinode-391061-m03_multinode-391061.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-391061 cp multinode-391061-m03:/home/docker/cp-test.txt                       | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
	|         | multinode-391061-m02:/home/docker/cp-test_multinode-391061-m03_multinode-391061-m02.txt |                  |         |                |                     |                     |
	| ssh     | multinode-391061 ssh -n                                                                 | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
	|         | multinode-391061-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-391061 ssh -n multinode-391061-m02 sudo cat                                   | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
	|         | /home/docker/cp-test_multinode-391061-m03_multinode-391061-m02.txt                      |                  |         |                |                     |                     |
	| node    | multinode-391061 node stop m03                                                          | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:04 UTC |
	| node    | multinode-391061 node start                                                             | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:04 UTC | 01 Nov 23 00:05 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |                |                     |                     |
	| node    | list -p multinode-391061                                                                | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:05 UTC |                     |
	| stop    | -p multinode-391061                                                                     | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:05 UTC | 01 Nov 23 00:05 UTC |
	| start   | -p multinode-391061                                                                     | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:05 UTC | 01 Nov 23 00:08 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |                |                     |                     |
	| node    | list -p multinode-391061                                                                | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:08 UTC |                     |
	| node    | multinode-391061 node delete                                                            | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:08 UTC | 01 Nov 23 00:08 UTC |
	|         | m03                                                                                     |                  |         |                |                     |                     |
	| stop    | multinode-391061 stop                                                                   | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:08 UTC | 01 Nov 23 00:08 UTC |
	| start   | -p multinode-391061                                                                     | multinode-391061 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:08 UTC |                     |
	|         | --wait=true -v=8                                                                        |                  |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |                |                     |                     |
	|         | --driver=kvm2                                                                           |                  |         |                |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/01 00:08:49
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 00:08:49.696747   30593 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:08:49.696976   30593 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:08:49.696984   30593 out.go:309] Setting ErrFile to fd 2...
	I1101 00:08:49.696989   30593 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:08:49.697199   30593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7251/.minikube/bin
	I1101 00:08:49.697724   30593 out.go:303] Setting JSON to false
	I1101 00:08:49.698581   30593 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3079,"bootTime":1698794251,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 00:08:49.698643   30593 start.go:138] virtualization: kvm guest
	I1101 00:08:49.701257   30593 out.go:177] * [multinode-391061] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1101 00:08:49.702839   30593 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 00:08:49.702844   30593 notify.go:220] Checking for updates...
	I1101 00:08:49.704612   30593 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 00:08:49.706320   30593 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-7251/kubeconfig
	I1101 00:08:49.707852   30593 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7251/.minikube
	I1101 00:08:49.709325   30593 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 00:08:49.710727   30593 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 00:08:49.712746   30593 config.go:182] Loaded profile config "multinode-391061": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1101 00:08:49.713116   30593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:08:49.713162   30593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:08:49.727252   30593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37705
	I1101 00:08:49.727584   30593 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:08:49.728056   30593 main.go:141] libmachine: Using API Version  1
	I1101 00:08:49.728075   30593 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:08:49.728412   30593 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:08:49.728601   30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
	I1101 00:08:49.728809   30593 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 00:08:49.729119   30593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:08:49.729158   30593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:08:49.742929   30593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42673
	I1101 00:08:49.743302   30593 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:08:49.743756   30593 main.go:141] libmachine: Using API Version  1
	I1101 00:08:49.743779   30593 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:08:49.744063   30593 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:08:49.744234   30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
	I1101 00:08:49.779391   30593 out.go:177] * Using the kvm2 driver based on existing profile
	I1101 00:08:49.780999   30593 start.go:298] selected driver: kvm2
	I1101 00:08:49.781015   30593 start.go:902] validating driver "kvm2" against &{Name:multinode-391061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-391061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.249 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:08:49.781172   30593 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 00:08:49.781470   30593 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:08:49.781541   30593 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17486-7251/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1101 00:08:49.796518   30593 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1101 00:08:49.797197   30593 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 00:08:49.797254   30593 cni.go:84] Creating CNI manager for ""
	I1101 00:08:49.797263   30593 cni.go:136] 2 nodes found, recommending kindnet
	I1101 00:08:49.797274   30593 start_flags.go:323] config:
	{Name:multinode-391061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-391061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.249 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
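With the profile loaded, cni.go recommends kindnet purely because two nodes are present (see the "2 nodes found, recommending kindnet" line above). A toy sketch of that selection rule; the pass-through behaviour for explicitly configured CNIs is an assumption:

```go
package main

import "fmt"

// chooseCNI mirrors the decision visible in the log: multi-node clusters
// with no explicit CNI get kindnet; anything configured is kept as-is.
func chooseCNI(configured string, nodeCount int) string {
	if configured == "" && nodeCount > 1 {
		return "kindnet"
	}
	return configured
}

func main() {
	fmt.Println(chooseCNI("", 2)) // kindnet
}
```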
	I1101 00:08:49.797449   30593 iso.go:125] acquiring lock: {Name:mk56e0e42e3cb427bae1fd4521b75db693021ac1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:08:49.799445   30593 out.go:177] * Starting control plane node multinode-391061 in cluster multinode-391061
	I1101 00:08:49.802107   30593 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1101 00:08:49.802154   30593 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17486-7251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1101 00:08:49.802163   30593 cache.go:56] Caching tarball of preloaded images
	I1101 00:08:49.802239   30593 preload.go:174] Found /home/jenkins/minikube-integration/17486-7251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1101 00:08:49.802251   30593 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1101 00:08:49.802383   30593 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/config.json ...
	I1101 00:08:49.802605   30593 start.go:365] acquiring machines lock for multinode-391061: {Name:mkd250049361a5d831a3d31c273569334737e54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:08:49.802660   30593 start.go:369] acquired machines lock for "multinode-391061" in 32.142µs
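Machine operations are serialized behind a named lock with a 13-minute timeout; this acquisition is uncontended and returns in 32µs. The real lock is file-based and shared across processes, which the toy channel-based sketch below does not model:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// chanMutex is a toy timeout-capable lock built on a buffered channel.
type chanMutex chan struct{}

func newChanMutex() chanMutex { return make(chanMutex, 1) }

// lockWithTimeout blocks until the lock is free or the timeout elapses.
func (m chanMutex) lockWithTimeout(d time.Duration) error {
	select {
	case m <- struct{}{}:
		return nil
	case <-time.After(d):
		return errors.New("timed out acquiring machines lock")
	}
}

func (m chanMutex) unlock() { <-m }

func main() {
	m := newChanMutex()
	start := time.Now()
	if err := m.lockWithTimeout(13 * time.Minute); err != nil {
		panic(err)
	}
	fmt.Printf("acquired machines lock in %v\n", time.Since(start))
	m.unlock()
}
```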
	I1101 00:08:49.802683   30593 start.go:96] Skipping create...Using existing machine configuration
	I1101 00:08:49.802692   30593 fix.go:54] fixHost starting: 
	I1101 00:08:49.802950   30593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:08:49.802988   30593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:08:49.817041   30593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39873
	I1101 00:08:49.817426   30593 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:08:49.817852   30593 main.go:141] libmachine: Using API Version  1
	I1101 00:08:49.817876   30593 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:08:49.818147   30593 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:08:49.818268   30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
	I1101 00:08:49.818364   30593 main.go:141] libmachine: (multinode-391061) Calling .GetState
	I1101 00:08:49.819780   30593 fix.go:102] recreateIfNeeded on multinode-391061: state=Stopped err=<nil>
	I1101 00:08:49.819798   30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
	W1101 00:08:49.819945   30593 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 00:08:49.822198   30593 out.go:177] * Restarting existing kvm2 VM for "multinode-391061" ...
	I1101 00:08:49.823675   30593 main.go:141] libmachine: (multinode-391061) Calling .Start
	I1101 00:08:49.823836   30593 main.go:141] libmachine: (multinode-391061) Ensuring networks are active...
	I1101 00:08:49.824527   30593 main.go:141] libmachine: (multinode-391061) Ensuring network default is active
	I1101 00:08:49.824903   30593 main.go:141] libmachine: (multinode-391061) Ensuring network mk-multinode-391061 is active
	I1101 00:08:49.825231   30593 main.go:141] libmachine: (multinode-391061) Getting domain xml...
	I1101 00:08:49.825825   30593 main.go:141] libmachine: (multinode-391061) Creating domain...
	I1101 00:08:51.072133   30593 main.go:141] libmachine: (multinode-391061) Waiting to get IP...
	I1101 00:08:51.072978   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:08:51.073561   30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
	I1101 00:08:51.073673   30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:51.073534   30629 retry.go:31] will retry after 229.675258ms: waiting for machine to come up
	I1101 00:08:51.305068   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:08:51.305486   30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
	I1101 00:08:51.305513   30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:51.305442   30629 retry.go:31] will retry after 372.862383ms: waiting for machine to come up
	I1101 00:08:51.680135   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:08:51.680628   30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
	I1101 00:08:51.680663   30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:51.680610   30629 retry.go:31] will retry after 314.755115ms: waiting for machine to come up
	I1101 00:08:51.997095   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:08:51.997485   30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
	I1101 00:08:51.997516   30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:51.997452   30629 retry.go:31] will retry after 376.70772ms: waiting for machine to come up
	I1101 00:08:52.376191   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:08:52.376728   30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
	I1101 00:08:52.376768   30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:52.376689   30629 retry.go:31] will retry after 583.291159ms: waiting for machine to come up
	I1101 00:08:52.961471   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:08:52.961889   30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
	I1101 00:08:52.961920   30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:52.961826   30629 retry.go:31] will retry after 803.566491ms: waiting for machine to come up
	I1101 00:08:53.766791   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:08:53.767211   30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
	I1101 00:08:53.767251   30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:53.767153   30629 retry.go:31] will retry after 1.032833525s: waiting for machine to come up
	I1101 00:08:54.801328   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:08:54.801700   30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
	I1101 00:08:54.801734   30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:54.801656   30629 retry.go:31] will retry after 1.044435025s: waiting for machine to come up
	I1101 00:08:55.847409   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:08:55.847850   30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
	I1101 00:08:55.847874   30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:55.847797   30629 retry.go:31] will retry after 1.41464542s: waiting for machine to come up
	I1101 00:08:57.264298   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:08:57.264621   30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
	I1101 00:08:57.264658   30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:57.264585   30629 retry.go:31] will retry after 1.783339985s: waiting for machine to come up
	I1101 00:08:59.050737   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:08:59.051258   30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
	I1101 00:08:59.051280   30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:08:59.051209   30629 retry.go:31] will retry after 2.24727828s: waiting for machine to come up
	I1101 00:09:01.300675   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:01.301123   30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
	I1101 00:09:01.301147   30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:09:01.301080   30629 retry.go:31] will retry after 2.659318668s: waiting for machine to come up
	I1101 00:09:03.964050   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:03.964412   30593 main.go:141] libmachine: (multinode-391061) DBG | unable to find current IP address of domain multinode-391061 in network mk-multinode-391061
	I1101 00:09:03.964433   30593 main.go:141] libmachine: (multinode-391061) DBG | I1101 00:09:03.964369   30629 retry.go:31] will retry after 4.002549509s: waiting for machine to come up
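The retry.go lines above poll the restarted VM for a DHCP lease, with waits that grow roughly from 230ms to 4s before the address finally appears. A sketch of such a poll loop; the doubling-with-jitter schedule is an assumption, since the log only shows the sampled delays:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it yields an address or the deadline
// passes, growing the delay (with jitter) between attempts, as the
// retry.go lines in the log suggest.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	delay := 200 * time.Millisecond
	timeout := time.After(deadline)
	for {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		select {
		case <-time.After(wait):
		case <-timeout:
			return "", errors.New("machine never obtained an IP")
		}
		if delay < 4*time.Second {
			delay *= 2
		}
	}
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 5 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.43", nil
	}, time.Minute)
	fmt.Println(ip, err)
}
```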
	I1101 00:09:07.970570   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:07.970947   30593 main.go:141] libmachine: (multinode-391061) Found IP for machine: 192.168.39.43
	I1101 00:09:07.970973   30593 main.go:141] libmachine: (multinode-391061) Reserving static IP address...
	I1101 00:09:07.970988   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has current primary IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:07.971417   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "multinode-391061", mac: "52:54:00:b9:c2:69", ip: "192.168.39.43"} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:07.971446   30593 main.go:141] libmachine: (multinode-391061) DBG | skip adding static IP to network mk-multinode-391061 - found existing host DHCP lease matching {name: "multinode-391061", mac: "52:54:00:b9:c2:69", ip: "192.168.39.43"}
	I1101 00:09:07.971454   30593 main.go:141] libmachine: (multinode-391061) Reserved static IP address: 192.168.39.43
	I1101 00:09:07.971463   30593 main.go:141] libmachine: (multinode-391061) Waiting for SSH to be available...
	I1101 00:09:07.971472   30593 main.go:141] libmachine: (multinode-391061) DBG | Getting to WaitForSSH function...
	I1101 00:09:07.973244   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:07.973598   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:07.973629   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:07.973785   30593 main.go:141] libmachine: (multinode-391061) DBG | Using SSH client type: external
	I1101 00:09:07.973815   30593 main.go:141] libmachine: (multinode-391061) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061/id_rsa (-rw-------)
	I1101 00:09:07.973859   30593 main.go:141] libmachine: (multinode-391061) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.43 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 00:09:07.973884   30593 main.go:141] libmachine: (multinode-391061) DBG | About to run SSH command:
	I1101 00:09:07.973895   30593 main.go:141] libmachine: (multinode-391061) DBG | exit 0
	I1101 00:09:08.070105   30593 main.go:141] libmachine: (multinode-391061) DBG | SSH cmd err, output: <nil>: 
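The "exit 0" command above is the whole readiness probe: the machine counts as SSH-reachable as soon as any command exits cleanly through the external ssh client. A sketch reproducing that probe with the same client options the log records (host, user, and key path are taken from the log; the sshReady wrapper itself is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// sshReady runs `ssh ... exit 0` with the same non-interactive options shown
// in the log, and reports whether the command exited cleanly.
func sshReady(user, host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, host),
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	ok := sshReady("docker", "192.168.39.43",
		"/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061/id_rsa")
	fmt.Println("ssh ready:", ok)
}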
	I1101 00:09:08.070483   30593 main.go:141] libmachine: (multinode-391061) Calling .GetConfigRaw
	I1101 00:09:08.071216   30593 main.go:141] libmachine: (multinode-391061) Calling .GetIP
	I1101 00:09:08.073614   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.074025   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:08.074060   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.074285   30593 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/config.json ...
	I1101 00:09:08.074479   30593 machine.go:88] provisioning docker machine ...
	I1101 00:09:08.074512   30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
	I1101 00:09:08.074714   30593 main.go:141] libmachine: (multinode-391061) Calling .GetMachineName
	I1101 00:09:08.074856   30593 buildroot.go:166] provisioning hostname "multinode-391061"
	I1101 00:09:08.074870   30593 main.go:141] libmachine: (multinode-391061) Calling .GetMachineName
	I1101 00:09:08.074990   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
	I1101 00:09:08.077098   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.077410   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:08.077452   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.077575   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
	I1101 00:09:08.077739   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:08.077899   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:08.078007   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
	I1101 00:09:08.078153   30593 main.go:141] libmachine: Using SSH client type: native
	I1101 00:09:08.078494   30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1101 00:09:08.078529   30593 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-391061 && echo "multinode-391061" | sudo tee /etc/hostname
	I1101 00:09:08.217944   30593 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-391061
	
	I1101 00:09:08.217967   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
	I1101 00:09:08.220671   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.220963   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:08.221024   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.221089   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
	I1101 00:09:08.221295   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:08.221466   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:08.221616   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
	I1101 00:09:08.221803   30593 main.go:141] libmachine: Using SSH client type: native
	I1101 00:09:08.222253   30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1101 00:09:08.222280   30593 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-391061' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-391061/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-391061' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 00:09:08.359049   30593 main.go:141] libmachine: SSH cmd err, output: <nil>: 
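The script it just ran is deliberately idempotent: it only touches /etc/hosts when the hostname is missing, and rewrites an existing 127.0.1.1 line in place rather than appending a duplicate. A small Go sketch rendering the same script for an arbitrary hostname (the hostsFixupCmd helper is illustrative; the shell body is taken from the log):

package main

import "fmt"

// hostsFixupCmd renders the idempotent /etc/hosts edit shown above: rewrite
// the 127.0.1.1 line if one exists, otherwise append one.
func hostsFixupCmd(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(hostsFixupCmd("multinode-391061"))
}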
	I1101 00:09:08.359078   30593 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7251/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7251/.minikube}
	I1101 00:09:08.359096   30593 buildroot.go:174] setting up certificates
	I1101 00:09:08.359104   30593 provision.go:83] configureAuth start
	I1101 00:09:08.359112   30593 main.go:141] libmachine: (multinode-391061) Calling .GetMachineName
	I1101 00:09:08.359381   30593 main.go:141] libmachine: (multinode-391061) Calling .GetIP
	I1101 00:09:08.361931   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.362234   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:08.362269   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.362374   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
	I1101 00:09:08.364658   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.364936   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:08.364968   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.365105   30593 provision.go:138] copyHostCerts
	I1101 00:09:08.365133   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem
	I1101 00:09:08.365172   30593 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem, removing ...
	I1101 00:09:08.365183   30593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem
	I1101 00:09:08.365248   30593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem (1082 bytes)
	I1101 00:09:08.365344   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem
	I1101 00:09:08.365365   30593 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem, removing ...
	I1101 00:09:08.365372   30593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem
	I1101 00:09:08.365399   30593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem (1123 bytes)
	I1101 00:09:08.365452   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem
	I1101 00:09:08.365467   30593 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem, removing ...
	I1101 00:09:08.365473   30593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem
	I1101 00:09:08.365494   30593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem (1675 bytes)
	I1101 00:09:08.365549   30593 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem org=jenkins.multinode-391061 san=[192.168.39.43 192.168.39.43 localhost 127.0.0.1 minikube multinode-391061]
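The server certificate carries SANs for the node IP, localhost, and both hostnames, so one cert satisfies every name the API endpoint may be reached by. A condensed Go sketch issuing a certificate with that SAN list; unlike minikube, which signs with its cluster CA key, this version self-signs for brevity, so it illustrates the SAN handling only:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SAN values mirror the san=[...] list in the log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-391061"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-391061"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.43"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}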
	I1101 00:09:08.497882   30593 provision.go:172] copyRemoteCerts
	I1101 00:09:08.497940   30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 00:09:08.497965   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
	I1101 00:09:08.500598   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.500931   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:08.500961   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.501176   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
	I1101 00:09:08.501356   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:08.501513   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
	I1101 00:09:08.501639   30593 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061/id_rsa Username:docker}
	I1101 00:09:08.594935   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 00:09:08.594993   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 00:09:08.617737   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 00:09:08.617835   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 00:09:08.639923   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 00:09:08.640003   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1101 00:09:08.662129   30593 provision.go:86] duration metric: configureAuth took 303.015088ms
	I1101 00:09:08.662155   30593 buildroot.go:189] setting minikube options for container-runtime
	I1101 00:09:08.662403   30593 config.go:182] Loaded profile config "multinode-391061": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1101 00:09:08.662426   30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
	I1101 00:09:08.662704   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
	I1101 00:09:08.665367   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.665756   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:08.665781   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.665918   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
	I1101 00:09:08.666128   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:08.666300   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:08.666449   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
	I1101 00:09:08.666613   30593 main.go:141] libmachine: Using SSH client type: native
	I1101 00:09:08.666928   30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1101 00:09:08.666940   30593 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1101 00:09:08.795906   30593 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1101 00:09:08.795936   30593 buildroot.go:70] root file system type: tmpfs
	I1101 00:09:08.796096   30593 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1101 00:09:08.796134   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
	I1101 00:09:08.798879   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.799232   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:08.799265   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.799423   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
	I1101 00:09:08.799598   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:08.799753   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:08.799868   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
	I1101 00:09:08.800041   30593 main.go:141] libmachine: Using SSH client type: native
	I1101 00:09:08.800361   30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1101 00:09:08.800421   30593 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1101 00:09:08.942805   30593 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1101 00:09:08.942844   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
	I1101 00:09:08.945908   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.946293   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:08.946326   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:08.946513   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
	I1101 00:09:08.946689   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:08.946882   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:08.947001   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
	I1101 00:09:08.947184   30593 main.go:141] libmachine: Using SSH client type: native
	I1101 00:09:08.947647   30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1101 00:09:08.947681   30593 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1101 00:09:09.848694   30593 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1101 00:09:09.848722   30593 machine.go:91] provisioned docker machine in 1.774228913s
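The diff-or-replace one-liner a few lines up is the write-if-changed idiom: the new unit file is only moved into place, and docker only reloaded, re-enabled, and restarted, when the rendered content differs from the file on disk (here the file did not exist yet, so it was installed). A local Go sketch of the same idiom (installIfChanged is illustrative):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// installIfChanged mirrors the `diff ... || { mv ...; restart; }` step: only
// replace the file, and report that a daemon-reload/restart is needed, when
// the rendered content differs from what is on disk.
func installIfChanged(path string, rendered []byte) (changed bool, err error) {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return false, nil // identical: nothing to do
	}
	if err != nil && !os.IsNotExist(err) {
		return false, err
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, rendered, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(tmp, path) // atomic replace, like the sudo mv
}

func main() {
	changed, err := installIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
	fmt.Println("changed:", changed, "err:", err)
}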
	I1101 00:09:09.848735   30593 start.go:300] post-start starting for "multinode-391061" (driver="kvm2")
	I1101 00:09:09.848748   30593 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 00:09:09.848772   30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
	I1101 00:09:09.849087   30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 00:09:09.849113   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
	I1101 00:09:09.851810   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:09.852197   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:09.852243   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:09.852386   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
	I1101 00:09:09.852556   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:09.852728   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
	I1101 00:09:09.852822   30593 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061/id_rsa Username:docker}
	I1101 00:09:09.947639   30593 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 00:09:09.951509   30593 command_runner.go:130] > NAME=Buildroot
	I1101 00:09:09.951530   30593 command_runner.go:130] > VERSION=2021.02.12-1-g0cee705-dirty
	I1101 00:09:09.951535   30593 command_runner.go:130] > ID=buildroot
	I1101 00:09:09.951542   30593 command_runner.go:130] > VERSION_ID=2021.02.12
	I1101 00:09:09.951549   30593 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1101 00:09:09.951586   30593 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 00:09:09.951598   30593 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7251/.minikube/addons for local assets ...
	I1101 00:09:09.951663   30593 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7251/.minikube/files for local assets ...
	I1101 00:09:09.951768   30593 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem -> 144632.pem in /etc/ssl/certs
	I1101 00:09:09.951785   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem -> /etc/ssl/certs/144632.pem
	I1101 00:09:09.951898   30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 00:09:09.959594   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem --> /etc/ssl/certs/144632.pem (1708 bytes)
	I1101 00:09:09.981962   30593 start.go:303] post-start completed in 133.213964ms
	I1101 00:09:09.982003   30593 fix.go:56] fixHost completed within 20.179294964s
	I1101 00:09:09.982027   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
	I1101 00:09:09.984776   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:09.985223   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:09.985252   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:09.985386   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
	I1101 00:09:09.985595   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:09.985729   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:09.985860   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
	I1101 00:09:09.985979   30593 main.go:141] libmachine: Using SSH client type: native
	I1101 00:09:09.986435   30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1101 00:09:09.986451   30593 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1101 00:09:10.119733   30593 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698797350.071514552
	
	I1101 00:09:10.119761   30593 fix.go:206] guest clock: 1698797350.071514552
	I1101 00:09:10.119769   30593 fix.go:219] Guest: 2023-11-01 00:09:10.071514552 +0000 UTC Remote: 2023-11-01 00:09:09.982007618 +0000 UTC m=+20.332511469 (delta=89.506934ms)
	I1101 00:09:10.119793   30593 fix.go:190] guest clock delta is within tolerance: 89.506934ms
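The guest-clock check compares the VM's `date +%s.%N` output with the host clock at probe time and accepts the machine when the delta is small. A sketch of that comparison using the exact values from this run (the 2-second tolerance is an assumption for illustration; the log does not state the threshold):

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK parses the guest's seconds/nanoseconds, compares against the
// host's clock at the time of the probe, and checks the delta against a
// tolerance.
func clockDeltaOK(guestSec, guestNsec int64, hostTime time.Time, tolerance time.Duration) (time.Duration, bool) {
	guest := time.Unix(guestSec, guestNsec)
	delta := guest.Sub(hostTime)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Values taken from the log: guest 1698797350.071514552, host 00:09:09.982007618 UTC.
	host := time.Date(2023, 11, 1, 0, 9, 9, 982007618, time.UTC)
	delta, ok := clockDeltaOK(1698797350, 71514552, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // ~89.5ms, as logged
}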
	I1101 00:09:10.119800   30593 start.go:83] releasing machines lock for "multinode-391061", held for 20.317128044s
	I1101 00:09:10.119826   30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
	I1101 00:09:10.120083   30593 main.go:141] libmachine: (multinode-391061) Calling .GetIP
	I1101 00:09:10.122834   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:10.123267   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:10.123301   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:10.123482   30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
	I1101 00:09:10.124067   30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
	I1101 00:09:10.124267   30593 main.go:141] libmachine: (multinode-391061) Calling .DriverName
	I1101 00:09:10.124386   30593 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 00:09:10.124433   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
	I1101 00:09:10.124459   30593 ssh_runner.go:195] Run: cat /version.json
	I1101 00:09:10.124497   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
	I1101 00:09:10.127197   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:10.127360   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:10.127632   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:10.127661   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:10.127789   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
	I1101 00:09:10.127807   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:10.127837   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:10.127985   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
	I1101 00:09:10.127991   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:10.128201   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
	I1101 00:09:10.128203   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:09:10.128392   30593 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
	I1101 00:09:10.128400   30593 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061/id_rsa Username:docker}
	I1101 00:09:10.128527   30593 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061/id_rsa Username:docker}
	I1101 00:09:10.219062   30593 command_runner.go:130] > {"iso_version": "v1.32.0-1698773592-17486", "kicbase_version": "v0.0.41-1698660445-17527", "minikube_version": "v1.32.0-beta.0", "commit": "01e1cff766666ed9b9dd97c2a32d71cdb94ff3cf"}
	I1101 00:09:10.244630   30593 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1101 00:09:10.245754   30593 ssh_runner.go:195] Run: systemctl --version
	I1101 00:09:10.251311   30593 command_runner.go:130] > systemd 247 (247)
	I1101 00:09:10.251350   30593 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1101 00:09:10.251621   30593 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1101 00:09:10.256782   30593 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1101 00:09:10.256835   30593 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 00:09:10.256887   30593 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 00:09:10.271406   30593 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1101 00:09:10.271460   30593 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 00:09:10.271470   30593 start.go:472] detecting cgroup driver to use...
	I1101 00:09:10.271565   30593 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:09:10.288462   30593 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1101 00:09:10.288546   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1101 00:09:10.298090   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1101 00:09:10.307653   30593 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1101 00:09:10.307716   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1101 00:09:10.317073   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1101 00:09:10.326800   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1101 00:09:10.336055   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1101 00:09:10.345573   30593 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 00:09:10.355553   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1101 00:09:10.365472   30593 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 00:09:10.373896   30593 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1101 00:09:10.374055   30593 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 00:09:10.382414   30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:09:10.484557   30593 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1101 00:09:10.503546   30593 start.go:472] detecting cgroup driver to use...
	I1101 00:09:10.503677   30593 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1101 00:09:10.516143   30593 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1101 00:09:10.517085   30593 command_runner.go:130] > [Unit]
	I1101 00:09:10.517117   30593 command_runner.go:130] > Description=Docker Application Container Engine
	I1101 00:09:10.517127   30593 command_runner.go:130] > Documentation=https://docs.docker.com
	I1101 00:09:10.517135   30593 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1101 00:09:10.517143   30593 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1101 00:09:10.517151   30593 command_runner.go:130] > StartLimitBurst=3
	I1101 00:09:10.517159   30593 command_runner.go:130] > StartLimitIntervalSec=60
	I1101 00:09:10.517169   30593 command_runner.go:130] > [Service]
	I1101 00:09:10.517175   30593 command_runner.go:130] > Type=notify
	I1101 00:09:10.517185   30593 command_runner.go:130] > Restart=on-failure
	I1101 00:09:10.517197   30593 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1101 00:09:10.517218   30593 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1101 00:09:10.517247   30593 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1101 00:09:10.517256   30593 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1101 00:09:10.517266   30593 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1101 00:09:10.517276   30593 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1101 00:09:10.517285   30593 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1101 00:09:10.517306   30593 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1101 00:09:10.517318   30593 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1101 00:09:10.517328   30593 command_runner.go:130] > ExecStart=
	I1101 00:09:10.517356   30593 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1101 00:09:10.517369   30593 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1101 00:09:10.517383   30593 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1101 00:09:10.517397   30593 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1101 00:09:10.517408   30593 command_runner.go:130] > LimitNOFILE=infinity
	I1101 00:09:10.517415   30593 command_runner.go:130] > LimitNPROC=infinity
	I1101 00:09:10.517425   30593 command_runner.go:130] > LimitCORE=infinity
	I1101 00:09:10.517433   30593 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1101 00:09:10.517441   30593 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1101 00:09:10.517447   30593 command_runner.go:130] > TasksMax=infinity
	I1101 00:09:10.517454   30593 command_runner.go:130] > TimeoutStartSec=0
	I1101 00:09:10.517463   30593 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1101 00:09:10.517469   30593 command_runner.go:130] > Delegate=yes
	I1101 00:09:10.517477   30593 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1101 00:09:10.517488   30593 command_runner.go:130] > KillMode=process
	I1101 00:09:10.517502   30593 command_runner.go:130] > [Install]
	I1101 00:09:10.517521   30593 command_runner.go:130] > WantedBy=multi-user.target
	I1101 00:09:10.517760   30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:09:10.537353   30593 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 00:09:10.559962   30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:09:10.572863   30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 00:09:10.585294   30593 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1101 00:09:10.613156   30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 00:09:10.626018   30593 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:09:10.642949   30593 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1101 00:09:10.643493   30593 ssh_runner.go:195] Run: which cri-dockerd
	I1101 00:09:10.647034   30593 command_runner.go:130] > /usr/bin/cri-dockerd
	I1101 00:09:10.647148   30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1101 00:09:10.656096   30593 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1101 00:09:10.672510   30593 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1101 00:09:10.775493   30593 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1101 00:09:10.890922   30593 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
	I1101 00:09:10.891096   30593 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
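The 130-byte /etc/docker/daemon.json pushed here is what switches dockerd to the cgroupfs driver. The log does not show the file body, so the fields below are an assumption based on Docker's standard daemon.json options, sketched in Go:

package main

import (
	"encoding/json"
	"fmt"
)

// Build and print a daemon.json that pins the cgroup driver. "exec-opts"
// with native.cgroupdriver is Docker's documented knob; the remaining
// fields are plausible defaults, not taken from the log.
func main() {
	cfg := map[string]any{
		"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
		"log-driver": "json-file",
		"log-opts":   map[string]string{"max-size": "100m"},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}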
	I1101 00:09:10.911224   30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:09:11.028462   30593 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1101 00:09:12.495501   30593 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.467002879s)
	I1101 00:09:12.495587   30593 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1101 00:09:12.596857   30593 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1101 00:09:12.696859   30593 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1101 00:09:12.818695   30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:09:12.925882   30593 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1101 00:09:12.942696   30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:09:13.046788   30593 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1101 00:09:13.125894   30593 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1101 00:09:13.125989   30593 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1101 00:09:13.131383   30593 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1101 00:09:13.131401   30593 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1101 00:09:13.131407   30593 command_runner.go:130] > Device: 16h/22d	Inode: 823         Links: 1
	I1101 00:09:13.131414   30593 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1101 00:09:13.131420   30593 command_runner.go:130] > Access: 2023-11-01 00:09:13.012751521 +0000
	I1101 00:09:13.131425   30593 command_runner.go:130] > Modify: 2023-11-01 00:09:13.012751521 +0000
	I1101 00:09:13.131432   30593 command_runner.go:130] > Change: 2023-11-01 00:09:13.015751521 +0000
	I1101 00:09:13.131448   30593 command_runner.go:130] >  Birth: -
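"Will wait 60s for socket path" is a simple poll: stat the cri-dockerd socket until it exists (as the stat output above confirms it eventually does) or the deadline passes. A sketch of that wait loop in Go:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls with stat until the path exists and is a unix socket,
// or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s did not appear within %v", path, timeout)
}

func main() {
	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
}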
	I1101 00:09:13.131608   30593 start.go:540] Will wait 60s for crictl version
	I1101 00:09:13.131663   30593 ssh_runner.go:195] Run: which crictl
	I1101 00:09:13.135151   30593 command_runner.go:130] > /usr/bin/crictl
	I1101 00:09:13.135210   30593 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 00:09:13.203365   30593 command_runner.go:130] > Version:  0.1.0
	I1101 00:09:13.203385   30593 command_runner.go:130] > RuntimeName:  docker
	I1101 00:09:13.203397   30593 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1101 00:09:13.203407   30593 command_runner.go:130] > RuntimeApiVersion:  v1
	I1101 00:09:13.203445   30593 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1101 00:09:13.203500   30593 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 00:09:13.228282   30593 command_runner.go:130] > 24.0.6
	I1101 00:09:13.228417   30593 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 00:09:13.252487   30593 command_runner.go:130] > 24.0.6
	I1101 00:09:13.254840   30593 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1101 00:09:13.254880   30593 main.go:141] libmachine: (multinode-391061) Calling .GetIP
	I1101 00:09:13.257487   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:13.257845   30593 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:05:56 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:09:13.257879   30593 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:09:13.258035   30593 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 00:09:13.261869   30593 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 00:09:13.272965   30593 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1101 00:09:13.273017   30593 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 00:09:13.291973   30593 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.3
	I1101 00:09:13.292012   30593 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 00:09:13.292018   30593 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.3
	I1101 00:09:13.292023   30593 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.3
	I1101 00:09:13.292028   30593 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1101 00:09:13.292033   30593 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1101 00:09:13.292039   30593 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1101 00:09:13.292046   30593 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1101 00:09:13.292051   30593 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 00:09:13.292058   30593 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1101 00:09:13.292659   30593 docker.go:699] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1101 00:09:13.292679   30593 docker.go:629] Images already preloaded, skipping extraction
	I1101 00:09:13.292737   30593 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 00:09:13.311772   30593 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.3
	I1101 00:09:13.311797   30593 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.3
	I1101 00:09:13.311806   30593 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.3
	I1101 00:09:13.311814   30593 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.3
	I1101 00:09:13.311821   30593 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1101 00:09:13.311826   30593 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1101 00:09:13.311831   30593 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1101 00:09:13.311836   30593 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1101 00:09:13.311841   30593 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 00:09:13.311857   30593 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1101 00:09:13.311882   30593 docker.go:699] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1101 00:09:13.311900   30593 cache_images.go:84] Images are preloaded, skipping loading
	I1101 00:09:13.311963   30593 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1101 00:09:13.336389   30593 command_runner.go:130] > cgroupfs
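The cgroup-driver probe is a one-liner against the Docker daemon using the same Go-template query the log shows; the kubelet config is then generated to match (cgroupfs in this run). A sketch of the probe:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dockerCgroupDriver asks the running Docker daemon which cgroup driver it
// uses, exactly as the `docker info --format {{.CgroupDriver}}` line above.
func dockerCgroupDriver() (string, error) {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	driver, err := dockerCgroupDriver()
	if err != nil {
		fmt.Println("detect failed:", err)
		return
	}
	fmt.Println("cgroup driver:", driver) // "cgroupfs" in the run above
}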
	I1101 00:09:13.336458   30593 cni.go:84] Creating CNI manager for ""
	I1101 00:09:13.336469   30593 cni.go:136] 2 nodes found, recommending kindnet
	I1101 00:09:13.336493   30593 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 00:09:13.336521   30593 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.43 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-391061 NodeName:multinode-391061 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.43"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.43 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 00:09:13.336694   30593 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.43
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-391061"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.43
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.43"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
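	The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new later in this log. As a minimal sketch, assuming the bundled kubeadm supports the "config validate" subcommand (present in recent releases), the generated file can be sanity-checked on the node before it is used:
	
	  # hedged sketch: validate the generated multi-document kubeadm config
	  sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" \
	    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	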
	I1101 00:09:13.336788   30593 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-391061 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-391061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
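	The drop-in above uses standard systemd override semantics: the empty ExecStart= clears the ExecStart inherited from /lib/systemd/system/kubelet.service, and the following line replaces it with the full kubelet invocation. A quick way to confirm the merged result on the node (standard systemd commands, shown as a sketch):
	
	  # reload unit files, then print the effective kubelet unit with drop-ins applied
	  sudo systemctl daemon-reload
	  systemctl cat kubelet
	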
	I1101 00:09:13.336851   30593 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 00:09:13.346367   30593 command_runner.go:130] > kubeadm
	I1101 00:09:13.346390   30593 command_runner.go:130] > kubectl
	I1101 00:09:13.346396   30593 command_runner.go:130] > kubelet
	I1101 00:09:13.346518   30593 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 00:09:13.346594   30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 00:09:13.355275   30593 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1101 00:09:13.370971   30593 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 00:09:13.387036   30593 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1101 00:09:13.402440   30593 ssh_runner.go:195] Run: grep 192.168.39.43	control-plane.minikube.internal$ /etc/hosts
	I1101 00:09:13.406022   30593 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.43	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
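	The bash one-liner above is an idempotent /etc/hosts edit: grep -v drops any stale control-plane.minikube.internal entry, echo appends the current mapping, and the result is copied back over /etc/hosts via a temp file. Verifying the alias afterwards (a sketch using standard tools):
	
	  # both should report 192.168.39.43 for the control-plane alias
	  getent hosts control-plane.minikube.internal
	  grep control-plane.minikube.internal /etc/hosts
	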
	I1101 00:09:13.417070   30593 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061 for IP: 192.168.39.43
	I1101 00:09:13.417103   30593 certs.go:190] acquiring lock for shared ca certs: {Name:mkd78a553474b872bb63abf547b6fa0a317dc3be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:09:13.417247   30593 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.key
	I1101 00:09:13.417296   30593 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.key
	I1101 00:09:13.417388   30593 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/client.key
	I1101 00:09:13.417450   30593 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/apiserver.key.7e75dda5
	I1101 00:09:13.417508   30593 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/proxy-client.key
	I1101 00:09:13.417523   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 00:09:13.417544   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 00:09:13.417575   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 00:09:13.417593   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 00:09:13.417603   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 00:09:13.417615   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 00:09:13.417625   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 00:09:13.417636   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 00:09:13.417690   30593 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463.pem (1338 bytes)
	W1101 00:09:13.417720   30593 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463_empty.pem, impossibly tiny 0 bytes
	I1101 00:09:13.417729   30593 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 00:09:13.417752   30593 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem (1082 bytes)
	I1101 00:09:13.417776   30593 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem (1123 bytes)
	I1101 00:09:13.417804   30593 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem (1675 bytes)
	I1101 00:09:13.417847   30593 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem (1708 bytes)
	I1101 00:09:13.417870   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463.pem -> /usr/share/ca-certificates/14463.pem
	I1101 00:09:13.417882   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem -> /usr/share/ca-certificates/144632.pem
	I1101 00:09:13.417894   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:09:13.418474   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 00:09:13.440131   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 00:09:13.461354   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 00:09:13.484158   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 00:09:13.507642   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 00:09:13.530560   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 00:09:13.552173   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 00:09:13.572803   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 00:09:13.594200   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463.pem --> /usr/share/ca-certificates/14463.pem (1338 bytes)
	I1101 00:09:13.614546   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem --> /usr/share/ca-certificates/144632.pem (1708 bytes)
	I1101 00:09:13.635287   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 00:09:13.655804   30593 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 00:09:13.671160   30593 ssh_runner.go:195] Run: openssl version
	I1101 00:09:13.676595   30593 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1101 00:09:13.676661   30593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14463.pem && ln -fs /usr/share/ca-certificates/14463.pem /etc/ssl/certs/14463.pem"
	I1101 00:09:13.687719   30593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14463.pem
	I1101 00:09:13.692306   30593 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 31 23:48 /usr/share/ca-certificates/14463.pem
	I1101 00:09:13.692356   30593 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:48 /usr/share/ca-certificates/14463.pem
	I1101 00:09:13.692398   30593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14463.pem
	I1101 00:09:13.697913   30593 command_runner.go:130] > 51391683
	I1101 00:09:13.698156   30593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14463.pem /etc/ssl/certs/51391683.0"
	I1101 00:09:13.708708   30593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144632.pem && ln -fs /usr/share/ca-certificates/144632.pem /etc/ssl/certs/144632.pem"
	I1101 00:09:13.718932   30593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144632.pem
	I1101 00:09:13.723625   30593 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 31 23:48 /usr/share/ca-certificates/144632.pem
	I1101 00:09:13.723665   30593 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:48 /usr/share/ca-certificates/144632.pem
	I1101 00:09:13.723717   30593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144632.pem
	I1101 00:09:13.729381   30593 command_runner.go:130] > 3ec20f2e
	I1101 00:09:13.729472   30593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144632.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 00:09:13.739928   30593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 00:09:13.749888   30593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:09:13.754135   30593 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 31 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:09:13.754186   30593 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:09:13.754224   30593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:09:13.759372   30593 command_runner.go:130] > b5213941
	I1101 00:09:13.759586   30593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
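	The hex values above (51391683, 3ec20f2e, b5213941) are OpenSSL subject-name hashes: OpenSSL looks up trusted CAs in /etc/ssl/certs through symlinks named <hash>.0, which is exactly what the ln -fs commands create. Reproducing one link by hand (a sketch; the hash depends on the certificate's subject):
	
	  h=$(openssl x509 -noout -hash -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	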
	I1101 00:09:13.770878   30593 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 00:09:13.774944   30593 command_runner.go:130] > ca.crt
	I1101 00:09:13.774961   30593 command_runner.go:130] > ca.key
	I1101 00:09:13.774966   30593 command_runner.go:130] > healthcheck-client.crt
	I1101 00:09:13.774977   30593 command_runner.go:130] > healthcheck-client.key
	I1101 00:09:13.774981   30593 command_runner.go:130] > peer.crt
	I1101 00:09:13.774985   30593 command_runner.go:130] > peer.key
	I1101 00:09:13.774988   30593 command_runner.go:130] > server.crt
	I1101 00:09:13.774993   30593 command_runner.go:130] > server.key
	I1101 00:09:13.775195   30593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 00:09:13.780693   30593 command_runner.go:130] > Certificate will not expire
	I1101 00:09:13.781005   30593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 00:09:13.786438   30593 command_runner.go:130] > Certificate will not expire
	I1101 00:09:13.786773   30593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 00:09:13.792247   30593 command_runner.go:130] > Certificate will not expire
	I1101 00:09:13.792305   30593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 00:09:13.797510   30593 command_runner.go:130] > Certificate will not expire
	I1101 00:09:13.797845   30593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 00:09:13.803206   30593 command_runner.go:130] > Certificate will not expire
	I1101 00:09:13.803273   30593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 00:09:13.808620   30593 command_runner.go:130] > Certificate will not expire
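	The -checkend 86400 probes above ask whether each certificate stays valid for the next 86400 seconds (24 hours): openssl prints "Certificate will not expire" and exits 0 if so, and exits non-zero otherwise, which is how the restart path decides whether certificates need regeneration. The same check in isolation (sketch):
	
	  # non-zero exit means the certificate expires within 24h
	  openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	    || echo "certificate expires within 24h"
	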
	I1101 00:09:13.808816   30593 kubeadm.go:404] StartCluster: {Name:multinode-391061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-391061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.249 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:09:13.808974   30593 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1101 00:09:13.826906   30593 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 00:09:13.836480   30593 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1101 00:09:13.836509   30593 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1101 00:09:13.836518   30593 command_runner.go:130] > /var/lib/minikube/etcd:
	I1101 00:09:13.836524   30593 command_runner.go:130] > member
	I1101 00:09:13.836597   30593 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 00:09:13.836612   30593 kubeadm.go:636] restartCluster start
	I1101 00:09:13.836669   30593 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 00:09:13.845747   30593 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:13.846165   30593 kubeconfig.go:135] verify returned: extract IP: "multinode-391061" does not appear in /home/jenkins/minikube-integration/17486-7251/kubeconfig
	I1101 00:09:13.846289   30593 kubeconfig.go:146] "multinode-391061" context is missing from /home/jenkins/minikube-integration/17486-7251/kubeconfig - will repair!
	I1101 00:09:13.846620   30593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7251/kubeconfig: {Name:mk525de6243b20b40961c1a878f4272a26e9a097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:09:13.847028   30593 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17486-7251/kubeconfig
	I1101 00:09:13.847260   30593 kapi.go:59] client config for multinode-391061: &rest.Config{Host:"https://192.168.39.43:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/client.key", CAFile:"/home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 00:09:13.847933   30593 cert_rotation.go:137] Starting client certificate rotation controller
	I1101 00:09:13.848016   30593 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 00:09:13.857014   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:13.857066   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:13.868306   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:13.868326   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:13.868365   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:13.879425   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:14.380169   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:14.380271   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:14.393563   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:14.879961   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:14.880030   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:14.891500   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:15.380030   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:15.380116   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:15.394849   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:15.880377   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:15.880462   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:15.892276   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:16.379827   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:16.379933   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:16.391756   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:16.880389   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:16.880484   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:16.892186   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:17.379748   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:17.379838   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:17.391913   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:17.880537   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:17.880630   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:17.893349   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:18.379933   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:18.380022   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:18.391643   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:18.880268   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:18.880355   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:18.892132   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:19.379676   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:19.379760   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:19.391501   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:19.880377   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:19.880494   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:19.892270   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:20.379875   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:20.379968   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:20.391559   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:20.880250   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:20.880355   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:20.891729   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:21.380337   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:21.380407   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:21.391986   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:21.879571   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:21.879681   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:21.891291   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:22.379884   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:22.379978   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:22.391825   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:22.880476   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:22.880570   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:22.892224   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:09:23.379724   30593 api_server.go:166] Checking apiserver status ...
	I1101 00:09:23.379835   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:09:23.391883   30593 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
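	
	The pgrep probes above fire roughly twice per second from 00:09:13 to 00:09:23 until the verification deadline lapses, which produces the "context deadline exceeded" on the next line and pushes the flow into cluster reconfiguration. A rough shell equivalent of that bounded wait (a sketch; the real loop is Go code driven by a context deadline):
	
	  # give the apiserver ~10s to appear, then fall through to reconfigure
	  timeout 10 bash -c \
	    'until sudo pgrep -xnf "kube-apiserver.*minikube.*"; do sleep 0.5; done' \
	    || echo "needs reconfigure: apiserver process never appeared"
	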
	I1101 00:09:23.857628   30593 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1101 00:09:23.857661   30593 kubeadm.go:1128] stopping kube-system containers ...
	I1101 00:09:23.857758   30593 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1101 00:09:23.879399   30593 command_runner.go:130] > c8ec107c7b83
	I1101 00:09:23.879423   30593 command_runner.go:130] > 8a050fec9e56
	I1101 00:09:23.879444   30593 command_runner.go:130] > 0922f8b627ba
	I1101 00:09:23.879448   30593 command_runner.go:130] > 7e5dd13abba8
	I1101 00:09:23.879453   30593 command_runner.go:130] > 717d368b8c2a
	I1101 00:09:23.879456   30593 command_runner.go:130] > beeaf0ac020b
	I1101 00:09:23.879460   30593 command_runner.go:130] > d52c65ebca75
	I1101 00:09:23.879464   30593 command_runner.go:130] > 5c355a51915e
	I1101 00:09:23.879467   30593 command_runner.go:130] > 6e72da581d8b
	I1101 00:09:23.879471   30593 command_runner.go:130] > 37d9dd0022b9
	I1101 00:09:23.879475   30593 command_runner.go:130] > c5ea3d84d06f
	I1101 00:09:23.879479   30593 command_runner.go:130] > 32294fac02b3
	I1101 00:09:23.879482   30593 command_runner.go:130] > a49a86a47d7c
	I1101 00:09:23.879486   30593 command_runner.go:130] > 36d5f0bd5cf2
	I1101 00:09:23.879494   30593 command_runner.go:130] > 92b70c8321ee
	I1101 00:09:23.879498   30593 command_runner.go:130] > 9f5176fde232
	I1101 00:09:23.879502   30593 command_runner.go:130] > f576715f1f47
	I1101 00:09:23.879506   30593 command_runner.go:130] > 44a2cc98732a
	I1101 00:09:23.879509   30593 command_runner.go:130] > 5a2e590156b6
	I1101 00:09:23.879518   30593 command_runner.go:130] > feea3a57d77e
	I1101 00:09:23.879525   30593 command_runner.go:130] > 7ad930b36263
	I1101 00:09:23.879528   30593 command_runner.go:130] > b110676d9563
	I1101 00:09:23.879533   30593 command_runner.go:130] > 8659d1168087
	I1101 00:09:23.879540   30593 command_runner.go:130] > 7f78495183a7
	I1101 00:09:23.879543   30593 command_runner.go:130] > 21b2a7338538
	I1101 00:09:23.879547   30593 command_runner.go:130] > 2b739c443c07
	I1101 00:09:23.879553   30593 command_runner.go:130] > f8c33525e5e4
	I1101 00:09:23.879557   30593 command_runner.go:130] > b6d83949182f
	I1101 00:09:23.879561   30593 command_runner.go:130] > 8dc7f1a0f0cf
	I1101 00:09:23.879565   30593 command_runner.go:130] > d114ab0f9727
	I1101 00:09:23.879569   30593 command_runner.go:130] > 88e660774880
	I1101 00:09:23.880506   30593 docker.go:470] Stopping containers: [c8ec107c7b83 8a050fec9e56 0922f8b627ba 7e5dd13abba8 717d368b8c2a beeaf0ac020b d52c65ebca75 5c355a51915e 6e72da581d8b 37d9dd0022b9 c5ea3d84d06f 32294fac02b3 a49a86a47d7c 36d5f0bd5cf2 92b70c8321ee 9f5176fde232 f576715f1f47 44a2cc98732a 5a2e590156b6 feea3a57d77e 7ad930b36263 b110676d9563 8659d1168087 7f78495183a7 21b2a7338538 2b739c443c07 f8c33525e5e4 b6d83949182f 8dc7f1a0f0cf d114ab0f9727 88e660774880]
	I1101 00:09:23.880594   30593 ssh_runner.go:195] Run: docker stop c8ec107c7b83 8a050fec9e56 0922f8b627ba 7e5dd13abba8 717d368b8c2a beeaf0ac020b d52c65ebca75 5c355a51915e 6e72da581d8b 37d9dd0022b9 c5ea3d84d06f 32294fac02b3 a49a86a47d7c 36d5f0bd5cf2 92b70c8321ee 9f5176fde232 f576715f1f47 44a2cc98732a 5a2e590156b6 feea3a57d77e 7ad930b36263 b110676d9563 8659d1168087 7f78495183a7 21b2a7338538 2b739c443c07 f8c33525e5e4 b6d83949182f 8dc7f1a0f0cf d114ab0f9727 88e660774880
	I1101 00:09:23.906747   30593 command_runner.go:130] > c8ec107c7b83
	I1101 00:09:23.906784   30593 command_runner.go:130] > 8a050fec9e56
	I1101 00:09:23.906790   30593 command_runner.go:130] > 0922f8b627ba
	I1101 00:09:23.906941   30593 command_runner.go:130] > 7e5dd13abba8
	I1101 00:09:23.907074   30593 command_runner.go:130] > 717d368b8c2a
	I1101 00:09:23.907086   30593 command_runner.go:130] > beeaf0ac020b
	I1101 00:09:23.907092   30593 command_runner.go:130] > d52c65ebca75
	I1101 00:09:23.907110   30593 command_runner.go:130] > 5c355a51915e
	I1101 00:09:23.907116   30593 command_runner.go:130] > 6e72da581d8b
	I1101 00:09:23.907123   30593 command_runner.go:130] > 37d9dd0022b9
	I1101 00:09:23.907130   30593 command_runner.go:130] > c5ea3d84d06f
	I1101 00:09:23.907139   30593 command_runner.go:130] > 32294fac02b3
	I1101 00:09:23.907146   30593 command_runner.go:130] > a49a86a47d7c
	I1101 00:09:23.907157   30593 command_runner.go:130] > 36d5f0bd5cf2
	I1101 00:09:23.907168   30593 command_runner.go:130] > 92b70c8321ee
	I1101 00:09:23.907176   30593 command_runner.go:130] > 9f5176fde232
	I1101 00:09:23.907188   30593 command_runner.go:130] > f576715f1f47
	I1101 00:09:23.907198   30593 command_runner.go:130] > 44a2cc98732a
	I1101 00:09:23.907202   30593 command_runner.go:130] > 5a2e590156b6
	I1101 00:09:23.907207   30593 command_runner.go:130] > feea3a57d77e
	I1101 00:09:23.907213   30593 command_runner.go:130] > 7ad930b36263
	I1101 00:09:23.907220   30593 command_runner.go:130] > b110676d9563
	I1101 00:09:23.907227   30593 command_runner.go:130] > 8659d1168087
	I1101 00:09:23.907238   30593 command_runner.go:130] > 7f78495183a7
	I1101 00:09:23.907244   30593 command_runner.go:130] > 21b2a7338538
	I1101 00:09:23.907254   30593 command_runner.go:130] > 2b739c443c07
	I1101 00:09:23.907263   30593 command_runner.go:130] > f8c33525e5e4
	I1101 00:09:23.907270   30593 command_runner.go:130] > b6d83949182f
	I1101 00:09:23.907278   30593 command_runner.go:130] > 8dc7f1a0f0cf
	I1101 00:09:23.907284   30593 command_runner.go:130] > d114ab0f9727
	I1101 00:09:23.907288   30593 command_runner.go:130] > 88e660774880
	I1101 00:09:23.908329   30593 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 00:09:23.924405   30593 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 00:09:23.933413   30593 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1101 00:09:23.933460   30593 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1101 00:09:23.933474   30593 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1101 00:09:23.933508   30593 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 00:09:23.933573   30593 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 00:09:23.933632   30593 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 00:09:23.942681   30593 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 00:09:23.942716   30593 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:09:24.061200   30593 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 00:09:24.061740   30593 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1101 00:09:24.062273   30593 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1101 00:09:24.062864   30593 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 00:09:24.063543   30593 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1101 00:09:24.064483   30593 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1101 00:09:24.065146   30593 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1101 00:09:24.065723   30593 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1101 00:09:24.066240   30593 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1101 00:09:24.066826   30593 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 00:09:24.067296   30593 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 00:09:24.067896   30593 command_runner.go:130] > [certs] Using the existing "sa" key
	I1101 00:09:24.069200   30593 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:09:24.889031   30593 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 00:09:24.889057   30593 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 00:09:24.889063   30593 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 00:09:24.889069   30593 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 00:09:24.889075   30593 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 00:09:24.889099   30593 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:09:25.068922   30593 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 00:09:25.068953   30593 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 00:09:25.068959   30593 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1101 00:09:25.069343   30593 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:09:25.134897   30593 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 00:09:25.134925   30593 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 00:09:25.141279   30593 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 00:09:25.148755   30593 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 00:09:25.153988   30593 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:09:25.224920   30593 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 00:09:25.228266   30593 api_server.go:52] waiting for apiserver process to appear ...
	I1101 00:09:25.228336   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:09:25.246286   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:09:25.761474   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:09:26.261798   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:09:26.761515   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:09:27.261570   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:09:27.761008   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:09:27.804720   30593 command_runner.go:130] > 1704
	I1101 00:09:27.806000   30593 api_server.go:72] duration metric: took 2.577736282s to wait for apiserver process to appear ...
	I1101 00:09:27.806022   30593 api_server.go:88] waiting for apiserver healthz status ...
	I1101 00:09:27.806041   30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I1101 00:09:27.806649   30593 api_server.go:269] stopped: https://192.168.39.43:8443/healthz: Get "https://192.168.39.43:8443/healthz": dial tcp 192.168.39.43:8443: connect: connection refused
	I1101 00:09:27.806703   30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I1101 00:09:27.807202   30593 api_server.go:269] stopped: https://192.168.39.43:8443/healthz: Get "https://192.168.39.43:8443/healthz": dial tcp 192.168.39.43:8443: connect: connection refused
	I1101 00:09:28.307960   30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I1101 00:09:31.401471   30593 api_server.go:279] https://192.168.39.43:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 00:09:31.401504   30593 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 00:09:31.401515   30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I1101 00:09:31.478349   30593 api_server.go:279] https://192.168.39.43:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 00:09:31.478386   30593 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
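	
	These 403s are an expected transient: TLS and the apiserver are already serving, but the probe is unauthenticated (system:anonymous), and the rules that open /healthz to anonymous users are only installed by the rbac/bootstrap-roles post-start hook. Comparing anonymous and authenticated probes (a sketch; client.crt/client.key are placeholder paths for the profile's admin client pair, which kubeadm puts in the system:masters superuser group):
	
	  # anonymous: 403 until bootstrap RBAC lands
	  curl -ks https://192.168.39.43:8443/healthz
	  # authenticated with the admin client cert: authorized even before RBAC bootstrap
	  curl -ks --cacert /var/lib/minikube/certs/ca.crt \
	    --cert client.crt --key client.key https://192.168.39.43:8443/healthz
	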
	I1101 00:09:31.807657   30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I1101 00:09:31.816386   30593 api_server.go:279] https://192.168.39.43:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 00:09:31.816421   30593 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
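	
	In the 500 responses only two checks fail, rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes, both post-start hooks that complete shortly after startup. Each named check is also exposed as its own endpoint, so a single failing hook can be probed directly (sketch):
	
	  # returns ok once the bootstrap-roles hook has completed
	  curl -ks https://192.168.39.43:8443/healthz/poststarthook/rbac/bootstrap-roles
	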
	I1101 00:09:32.308084   30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I1101 00:09:32.313351   30593 api_server.go:279] https://192.168.39.43:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 00:09:32.313393   30593 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 00:09:32.807687   30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I1101 00:09:32.814924   30593 api_server.go:279] https://192.168.39.43:8443/healthz returned 200:
	ok
	I1101 00:09:32.815019   30593 round_trippers.go:463] GET https://192.168.39.43:8443/version
	I1101 00:09:32.815029   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:32.815039   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:32.815049   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:32.823839   30593 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1101 00:09:32.823862   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:32.823873   30593 round_trippers.go:580]     Audit-Id: 654a1cb8-a85b-41cb-aea3-21ea6bc79004
	I1101 00:09:32.823885   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:32.823891   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:32.823898   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:32.823905   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:32.823913   30593 round_trippers.go:580]     Content-Length: 264
	I1101 00:09:32.823921   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:32 GMT
	I1101 00:09:32.823947   30593 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1101 00:09:32.824032   30593 api_server.go:141] control plane version: v1.28.3
	I1101 00:09:32.824050   30593 api_server.go:131] duration metric: took 5.018019595s to wait for apiserver health ...
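	
	Once /healthz returns 200, the client confirms the control-plane version with a GET to /version, shown as the JSON body above. The same probe from a workstation, assuming the multinode-391061 context repaired earlier in this log (sketch):
	
	  kubectl --context multinode-391061 version --output=json
	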
	I1101 00:09:32.824061   30593 cni.go:84] Creating CNI manager for ""
	I1101 00:09:32.824070   30593 cni.go:136] 2 nodes found, recommending kindnet
	I1101 00:09:32.826169   30593 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1101 00:09:32.827914   30593 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 00:09:32.841919   30593 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1101 00:09:32.841942   30593 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1101 00:09:32.841948   30593 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1101 00:09:32.841955   30593 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1101 00:09:32.841960   30593 command_runner.go:130] > Access: 2023-11-01 00:09:01.939751521 +0000
	I1101 00:09:32.841969   30593 command_runner.go:130] > Modify: 2023-10-31 23:04:20.000000000 +0000
	I1101 00:09:32.841974   30593 command_runner.go:130] > Change: 2023-11-01 00:09:00.154751521 +0000
	I1101 00:09:32.841979   30593 command_runner.go:130] >  Birth: -
	I1101 00:09:32.843041   30593 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1101 00:09:32.843061   30593 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1101 00:09:32.868639   30593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 00:09:34.233741   30593 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1101 00:09:34.264714   30593 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1101 00:09:34.269029   30593 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1101 00:09:34.306476   30593 command_runner.go:130] > daemonset.apps/kindnet configured
	I1101 00:09:34.313598   30593 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.44492846s)
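	
	The "unchanged" vs "configured" lines above reflect kubectl apply's three-way merge: the kindnet RBAC objects already matched the manifest, so only the DaemonSet was rewritten. To preview such changes without applying them, the same binary's diff subcommand can be used (a sketch; it exits non-zero when differences exist):
	
	  sudo /var/lib/minikube/binaries/v1.28.3/kubectl diff \
	    --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	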
	I1101 00:09:34.313628   30593 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 00:09:34.313739   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I1101 00:09:34.313753   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:34.313764   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.313774   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.328832   30593 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1101 00:09:34.328855   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:34.328863   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:34.328871   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:34.328944   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.328962   30593 round_trippers.go:580]     Audit-Id: 9a80f099-79a4-48ce-bc32-9266f1c0dc9f
	I1101 00:09:34.328971   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.328985   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.330618   30593 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1205"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84772 chars]
	I1101 00:09:34.334579   30593 system_pods.go:59] 12 kube-system pods found
	I1101 00:09:34.334612   30593 system_pods.go:61] "coredns-5dd5756b68-dg5w7" [eb94555e-1465-4dec-9d6d-ebcbec02841e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 00:09:34.334627   30593 system_pods.go:61] "etcd-multinode-391061" [0537cc4c-2127-4424-b02f-9e4747bc8713] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 00:09:34.334633   30593 system_pods.go:61] "kindnet-4jfj9" [2559e20b-85cf-43d5-8663-7ec855d71df9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 00:09:34.334638   30593 system_pods.go:61] "kindnet-lcljq" [171d5f22-d781-4224-88f7-f940ad9e747b] Running
	I1101 00:09:34.334642   30593 system_pods.go:61] "kindnet-wrdhd" [85db010e-82bd-4efa-a760-0669bf1e52de] Running
	I1101 00:09:34.334649   30593 system_pods.go:61] "kube-apiserver-multinode-391061" [dff82899-3db2-46a2-aea0-ec57d58be1c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 00:09:34.334659   30593 system_pods.go:61] "kube-controller-manager-multinode-391061" [4775e566-6acd-43ac-b7cd-8dbd245c33cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 00:09:34.334666   30593 system_pods.go:61] "kube-proxy-clsrp" [a747b091-d679-4ae6-a995-c980235c9a61] Running
	I1101 00:09:34.334670   30593 system_pods.go:61] "kube-proxy-rcnv9" [9b65a6f4-4c34-40e5-a5bd-aedfc335cbc9] Running
	I1101 00:09:34.334674   30593 system_pods.go:61] "kube-proxy-vdjh2" [9838a111-09e4-4975-b925-1ae5dcfa7334] Running
	I1101 00:09:34.334679   30593 system_pods.go:61] "kube-scheduler-multinode-391061" [eaf767ff-8f68-4b91-bcd7-b550481a6155] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 00:09:34.334685   30593 system_pods.go:61] "storage-provisioner" [b0b970e9-7d0b-4e94-8ca8-2f3348eaf579] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 00:09:34.334691   30593 system_pods.go:74] duration metric: took 21.056413ms to wait for pod list to return data ...
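
[Editor's note] system_pods.go above lists the kube-system pods and summarises per-container readiness. A minimal client-go sketch of the same listing, assuming a reachable cluster; the kubeconfig path is illustrative and none of the names below are minikube's own:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            ready := true
            for _, c := range p.Status.ContainerStatuses {
                ready = ready && c.Ready
            }
            // Mirrors the "Running / Ready:ContainersNotReady" summaries above.
            fmt.Printf("%s phase=%s allContainersReady=%v\n", p.Name, p.Status.Phase, ready)
        }
    }
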
	I1101 00:09:34.334704   30593 node_conditions.go:102] verifying NodePressure condition ...
	I1101 00:09:34.334757   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes
	I1101 00:09:34.334764   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:34.334771   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.334777   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.340145   30593 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1101 00:09:34.340163   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:34.340169   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:34.340175   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.340180   30593 round_trippers.go:580]     Audit-Id: 1531eb5d-604e-4c94-96b1-59616ac61bc1
	I1101 00:09:34.340185   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.340189   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.340199   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:34.340500   30593 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1205"},"items":[{"metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"ma
nagedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v [truncated 9590 chars]
	I1101 00:09:34.341106   30593 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:09:34.341127   30593 node_conditions.go:123] node cpu capacity is 2
	I1101 00:09:34.341135   30593 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:09:34.341139   30593 node_conditions.go:123] node cpu capacity is 2
	I1101 00:09:34.341143   30593 node_conditions.go:105] duration metric: took 6.435475ms to run NodePressure ...
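
[Editor's note] node_conditions.go reads each node's ephemeral-storage and CPU capacity before declaring NodePressure verified. A sketch of an equivalent check, assuming a *kubernetes.Clientset built as in the previous sketch; checkNodePressure is a hypothetical name:

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // checkNodePressure prints the per-node capacities logged above and
    // flags any memory or disk pressure conditions.
    func checkNodePressure(ctx context.Context, cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
            for _, cond := range n.Status.Conditions {
                if (cond.Type == corev1.NodeMemoryPressure || cond.Type == corev1.NodeDiskPressure) &&
                    cond.Status == corev1.ConditionTrue {
                    fmt.Printf("  pressure: %s\n", cond.Type)
                }
            }
        }
        return nil
    }
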
	I1101 00:09:34.341158   30593 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:09:34.596643   30593 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1101 00:09:34.664781   30593 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1101 00:09:34.667106   30593 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1101 00:09:34.667212   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I1101 00:09:34.667221   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:34.667228   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.667234   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.673886   30593 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1101 00:09:34.673905   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:34.673912   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.673918   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.673923   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:34.673936   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:34.673941   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.673946   30593 round_trippers.go:580]     Audit-Id: 7dc67d14-eb2e-46d1-aa78-54d52af1af34
	I1101 00:09:34.675336   30593 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1208"},"items":[{"metadata":{"name":"etcd-multinode-391061","namespace":"kube-system","uid":"0537cc4c-2127-4424-b02f-9e4747bc8713","resourceVersion":"1180","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.43:2379","kubernetes.io/config.hash":"3983ae368fa449f28180d36143aa3911","kubernetes.io/config.mirror":"3983ae368fa449f28180d36143aa3911","kubernetes.io/config.seen":"2023-11-01T00:02:21.059094445Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 29766 chars]
	I1101 00:09:34.676627   30593 kubeadm.go:787] kubelet initialised
	I1101 00:09:34.676644   30593 kubeadm.go:788] duration metric: took 9.518378ms waiting for restarted kubelet to initialise ...
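
[Editor's note] The "restarted kubelet" probe above is a single list of the static control-plane pods selected by label (the URL encodes tier=control-plane as tier%3Dcontrol-plane). One iteration could look like this, reusing the hypothetical ctx and cs from the sketches above inside a function returning error:

    // List only the static control-plane pods, as the GET above does.
    pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
        LabelSelector: "tier=control-plane",
    })
    if err != nil {
        return err
    }
    fmt.Printf("kubelet initialised: %d control-plane pods\n", len(pods.Items))
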
	I1101 00:09:34.676651   30593 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:09:34.676705   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I1101 00:09:34.676713   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:34.676720   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.676728   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.683293   30593 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1101 00:09:34.683308   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:34.683315   30593 round_trippers.go:580]     Audit-Id: b0192f99-985e-4aae-927b-c47d95fe8014
	I1101 00:09:34.683321   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.683327   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.683332   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:34.683338   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:34.683350   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.685550   30593 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1208"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84772 chars]
	I1101 00:09:34.688329   30593 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:34.688397   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
	I1101 00:09:34.688408   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:34.688416   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.688421   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.698455   30593 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1101 00:09:34.699740   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:34.699755   30593 round_trippers.go:580]     Audit-Id: eb7d9633-7fab-456d-a9f4-795f402a1e5a
	I1101 00:09:34.699764   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.699774   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.699785   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:34.699794   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:34.699803   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.699985   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1101 00:09:34.700490   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:34.700507   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:34.700517   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.700526   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.713644   30593 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1101 00:09:34.713666   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:34.713679   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:34.713686   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:34.713694   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.713702   30593 round_trippers.go:580]     Audit-Id: ee2f8b85-6ebc-4ce5-b02d-f9b38983f319
	I1101 00:09:34.713710   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.713722   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.713963   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
	I1101 00:09:34.714314   30593 pod_ready.go:97] node "multinode-391061" hosting pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
	I1101 00:09:34.714332   30593 pod_ready.go:81] duration metric: took 25.984465ms waiting for pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace to be "Ready" ...
	E1101 00:09:34.714343   30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061" hosting pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
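
[Editor's note] pod_ready.go gives each system-critical pod up to 4m0s to become Ready but, as the coredns wait just showed, bails out as soon as the hosting node reports Ready=False, since the pod cannot become Ready there. A condensed, hypothetical version of that loop (not minikube's actual code):

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod until its Ready condition is True, or
    // returns early when the node hosting it is itself not Ready.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
            if err != nil {
                return err
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
                    return fmt.Errorf("node %q hosting pod %q is not Ready (skipping)", node.Name, name)
                }
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s waiting for pod %q", timeout, name)
    }
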
	I1101 00:09:34.714355   30593 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-391061" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:34.714451   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-391061
	I1101 00:09:34.714465   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:34.714476   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.714486   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.716800   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:34.716818   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:34.716827   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.716838   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.716846   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:34.716854   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:34.716866   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.716879   30593 round_trippers.go:580]     Audit-Id: 0183d545-7a83-4bf3-bb19-280d54d90e72
	I1101 00:09:34.717288   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-391061","namespace":"kube-system","uid":"0537cc4c-2127-4424-b02f-9e4747bc8713","resourceVersion":"1180","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.43:2379","kubernetes.io/config.hash":"3983ae368fa449f28180d36143aa3911","kubernetes.io/config.mirror":"3983ae368fa449f28180d36143aa3911","kubernetes.io/config.seen":"2023-11-01T00:02:21.059094445Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6296 chars]
	I1101 00:09:34.717688   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:34.717702   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:34.717708   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.717715   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.719608   30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:09:34.719624   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:34.719632   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.719640   30593 round_trippers.go:580]     Audit-Id: cc656017-62ca-46cc-93aa-6f56e0bacf57
	I1101 00:09:34.719647   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.719655   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.719663   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:34.719673   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:34.719831   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
	I1101 00:09:34.720155   30593 pod_ready.go:97] node "multinode-391061" hosting pod "etcd-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
	I1101 00:09:34.720173   30593 pod_ready.go:81] duration metric: took 5.809883ms waiting for pod "etcd-multinode-391061" in "kube-system" namespace to be "Ready" ...
	E1101 00:09:34.720181   30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061" hosting pod "etcd-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
	I1101 00:09:34.720222   30593 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-391061" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:34.720281   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
	I1101 00:09:34.720291   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:34.720302   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.720316   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.727693   30593 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1101 00:09:34.727724   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:34.727735   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:34.727746   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:34.727757   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.727768   30593 round_trippers.go:580]     Audit-Id: f429dcbd-b1c6-47e9-b094-3b51b74fd598
	I1101 00:09:34.727779   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.727790   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.727953   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
	I1101 00:09:34.728461   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:34.728479   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:34.728490   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.728500   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.730599   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:34.730613   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:34.730619   30593 round_trippers.go:580]     Audit-Id: 0de3f8aa-089c-4434-b8d3-d71e99713bfd
	I1101 00:09:34.730624   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.730632   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.730644   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:34.730660   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:34.730670   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.730850   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
	I1101 00:09:34.731213   30593 pod_ready.go:97] node "multinode-391061" hosting pod "kube-apiserver-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
	I1101 00:09:34.731234   30593 pod_ready.go:81] duration metric: took 11.0013ms waiting for pod "kube-apiserver-multinode-391061" in "kube-system" namespace to be "Ready" ...
	E1101 00:09:34.731247   30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061" hosting pod "kube-apiserver-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
	I1101 00:09:34.731266   30593 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-391061" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:34.731321   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-391061
	I1101 00:09:34.731332   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:34.731342   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.731350   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.735460   30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:09:34.735475   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:34.735481   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:34.735488   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.735501   30593 round_trippers.go:580]     Audit-Id: 2bd7494f-9968-4fd2-aca0-bb70496933d6
	I1101 00:09:34.735518   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.735525   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.735540   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:34.735848   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-391061","namespace":"kube-system","uid":"4775e566-6acd-43ac-b7cd-8dbd245c33cf","resourceVersion":"1178","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"129a8ea77cdb10a9dd895cecf9b472c5","kubernetes.io/config.mirror":"129a8ea77cdb10a9dd895cecf9b472c5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059092388Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I1101 00:09:34.736287   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:34.736300   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:34.736307   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.736315   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.738460   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:34.738480   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:34.738490   30593 round_trippers.go:580]     Audit-Id: b9555108-2183-46ca-b82f-b9cd6213e770
	I1101 00:09:34.738511   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.738524   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.738532   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:34.738547   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:34.738555   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.738690   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
	I1101 00:09:34.739057   30593 pod_ready.go:97] node "multinode-391061" hosting pod "kube-controller-manager-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
	I1101 00:09:34.739086   30593 pod_ready.go:81] duration metric: took 7.809638ms waiting for pod "kube-controller-manager-multinode-391061" in "kube-system" namespace to be "Ready" ...
	E1101 00:09:34.739103   30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061" hosting pod "kube-controller-manager-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
	I1101 00:09:34.739113   30593 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-clsrp" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:34.914034   30593 request.go:629] Waited for 174.835524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-clsrp
	I1101 00:09:34.914109   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-clsrp
	I1101 00:09:34.914114   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:34.914121   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:34.914131   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:34.916919   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:34.916946   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:34.916955   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:34.916964   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:34.916972   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:34.916983   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:34.916990   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:34 GMT
	I1101 00:09:34.917003   30593 round_trippers.go:580]     Audit-Id: 7b74a314-8cec-4d22-9be3-8af74ba926c4
	I1101 00:09:34.917222   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-clsrp","generateName":"kube-proxy-","namespace":"kube-system","uid":"a747b091-d679-4ae6-a995-c980235c9a61","resourceVersion":"1203","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4c7befcb-672f-40b1-8090-82e04bd6e1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c7befcb-672f-40b1-8090-82e04bd6e1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5738 chars]
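
[Editor's note] The "Waited ... due to client-side throttling" lines are client-go's default token-bucket rate limiter at work; the rest.Config dump near the end of this log shows QPS:0, Burst:0, which client-go interprets as the defaults (QPS 5, Burst 10). If the resulting ~200ms waits mattered, the limits could be raised when building the config; a sketch reusing the imports from the earlier sketches:

    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
    if err != nil {
        return err
    }
    // Zero values mean "use defaults" (QPS 5, Burst 10); raising them
    // avoids the client-side waits seen above at the cost of putting
    // more concurrent load on the API server.
    cfg.QPS = 50
    cfg.Burst = 100
    cs, err := kubernetes.NewForConfig(cfg)
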
	I1101 00:09:35.113972   30593 request.go:629] Waited for 196.314968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:35.114094   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:35.114106   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:35.114117   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:35.114128   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:35.116700   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:35.116727   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:35.116736   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:35.116744   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:35 GMT
	I1101 00:09:35.116752   30593 round_trippers.go:580]     Audit-Id: 520e1602-a5d2-496e-9336-3d05ae9bf431
	I1101 00:09:35.116760   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:35.116769   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:35.116778   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:35.116880   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
	I1101 00:09:35.117203   30593 pod_ready.go:97] node "multinode-391061" hosting pod "kube-proxy-clsrp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
	I1101 00:09:35.117220   30593 pod_ready.go:81] duration metric: took 378.09771ms waiting for pod "kube-proxy-clsrp" in "kube-system" namespace to be "Ready" ...
	E1101 00:09:35.117234   30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061" hosting pod "kube-proxy-clsrp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
	I1101 00:09:35.117249   30593 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rcnv9" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:35.314720   30593 request.go:629] Waited for 197.37685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rcnv9
	I1101 00:09:35.314784   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rcnv9
	I1101 00:09:35.314790   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:35.314797   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:35.314806   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:35.317474   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:35.317495   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:35.317502   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:35 GMT
	I1101 00:09:35.317508   30593 round_trippers.go:580]     Audit-Id: 9af5c93f-eeb8-4bf5-91cf-0004ad594526
	I1101 00:09:35.317513   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:35.317526   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:35.317532   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:35.317537   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:35.317656   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rcnv9","generateName":"kube-proxy-","namespace":"kube-system","uid":"9b65a6f4-4c34-40e5-a5bd-aedfc335cbc9","resourceVersion":"983","creationTimestamp":"2023-11-01T00:03:22Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4c7befcb-672f-40b1-8090-82e04bd6e1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:03:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c7befcb-672f-40b1-8090-82e04bd6e1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5749 chars]
	I1101 00:09:35.514541   30593 request.go:629] Waited for 196.422301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m02
	I1101 00:09:35.514605   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m02
	I1101 00:09:35.514610   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:35.514620   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:35.514626   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:35.516964   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:35.516981   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:35.516987   30593 round_trippers.go:580]     Audit-Id: f60ca5be-eff7-45b6-b4ef-25a4244f2ac8
	I1101 00:09:35.516992   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:35.516999   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:35.517007   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:35.517016   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:35.517024   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:35 GMT
	I1101 00:09:35.517144   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061-m02","uid":"75fe164a-6fd6-4525-bacf-d792a509255b","resourceVersion":"999","creationTimestamp":"2023-11-01T00:07:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3253 chars]
	I1101 00:09:35.517386   30593 pod_ready.go:92] pod "kube-proxy-rcnv9" in "kube-system" namespace has status "Ready":"True"
	I1101 00:09:35.517399   30593 pod_ready.go:81] duration metric: took 400.144025ms waiting for pod "kube-proxy-rcnv9" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:35.517407   30593 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vdjh2" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:35.713801   30593 request.go:629] Waited for 196.321571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vdjh2
	I1101 00:09:35.713897   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vdjh2
	I1101 00:09:35.713902   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:35.713912   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:35.713919   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:35.718570   30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:09:35.718593   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:35.718599   30593 round_trippers.go:580]     Audit-Id: a80b7d1f-2804-4453-9d76-e2f5feeecd8b
	I1101 00:09:35.718604   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:35.718609   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:35.718614   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:35.718619   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:35.718624   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:35 GMT
	I1101 00:09:35.719017   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vdjh2","generateName":"kube-proxy-","namespace":"kube-system","uid":"9838a111-09e4-4975-b925-1ae5dcfa7334","resourceVersion":"1096","creationTimestamp":"2023-11-01T00:04:18Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4c7befcb-672f-40b1-8090-82e04bd6e1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:04:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c7befcb-672f-40b1-8090-82e04bd6e1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I1101 00:09:35.914812   30593 request.go:629] Waited for 195.361033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m03
	I1101 00:09:35.914878   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m03
	I1101 00:09:35.914884   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:35.914892   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:35.914905   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:35.918630   30593 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I1101 00:09:35.918651   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:35.918658   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:35.918669   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:35.918675   30593 round_trippers.go:580]     Content-Length: 210
	I1101 00:09:35.918680   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:35 GMT
	I1101 00:09:35.918685   30593 round_trippers.go:580]     Audit-Id: 8559bcdf-7ea2-4533-82a7-71b9489af62e
	I1101 00:09:35.918693   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:35.918698   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:35.918716   30593 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-391061-m03\" not found","reason":"NotFound","details":{"name":"multinode-391061-m03","kind":"nodes"},"code":404}
	I1101 00:09:35.918899   30593 pod_ready.go:97] node "multinode-391061-m03" hosting pod "kube-proxy-vdjh2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-391061-m03": nodes "multinode-391061-m03" not found
	I1101 00:09:35.918915   30593 pod_ready.go:81] duration metric: took 401.503391ms waiting for pod "kube-proxy-vdjh2" in "kube-system" namespace to be "Ready" ...
	E1101 00:09:35.918928   30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061-m03" hosting pod "kube-proxy-vdjh2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-391061-m03": nodes "multinode-391061-m03" not found
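
[Editor's note] The 404 above is the expected outcome: multinode-391061-m03 was removed earlier in the test sequence, so the wait for its kube-proxy pod is skipped rather than failed. The standard client-go pattern for telling that case apart, where apierrors is k8s.io/apimachinery/pkg/api/errors and cs/ctx are the hypothetical values from the earlier sketches:

    node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-391061-m03", metav1.GetOptions{})
    switch {
    case apierrors.IsNotFound(err):
        // Deleted node: treat its hosted pods as unready and move on,
        // as pod_ready.go does above.
    case err != nil:
        return err
    default:
        _ = node // node still exists; inspect its Ready condition
    }
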
	I1101 00:09:35.918938   30593 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-391061" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:36.114381   30593 request.go:629] Waited for 195.370649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
	I1101 00:09:36.114441   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
	I1101 00:09:36.114446   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:36.114453   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:36.114459   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:36.117280   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:36.117299   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:36.117305   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:36.117310   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:36.117316   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:36.117324   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:36 GMT
	I1101 00:09:36.117332   30593 round_trippers.go:580]     Audit-Id: 1a904aba-8eb8-4b24-84bc-bed0f6168940
	I1101 00:09:36.117345   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:36.117488   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-391061","namespace":"kube-system","uid":"eaf767ff-8f68-4b91-bcd7-b550481a6155","resourceVersion":"1187","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.mirror":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.seen":"2023-11-01T00:02:21.059093363Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I1101 00:09:36.314311   30593 request.go:629] Waited for 196.435913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:36.314416   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:36.314424   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:36.314432   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:36.314438   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:36.317156   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:36.317180   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:36.317187   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:36.317193   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:36 GMT
	I1101 00:09:36.317198   30593 round_trippers.go:580]     Audit-Id: 438f8f57-c6d3-4b09-82e1-c9c57e8542d5
	I1101 00:09:36.317207   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:36.317226   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:36.317232   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:36.317370   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
	I1101 00:09:36.317685   30593 pod_ready.go:97] node "multinode-391061" hosting pod "kube-scheduler-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
	I1101 00:09:36.317702   30593 pod_ready.go:81] duration metric: took 398.74998ms waiting for pod "kube-scheduler-multinode-391061" in "kube-system" namespace to be "Ready" ...
	E1101 00:09:36.317710   30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061" hosting pod "kube-scheduler-multinode-391061" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-391061" has status "Ready":"False"
	I1101 00:09:36.317717   30593 pod_ready.go:38] duration metric: took 1.641059341s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:09:36.317736   30593 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 00:09:36.328581   30593 command_runner.go:130] > -16
	I1101 00:09:36.329017   30593 ops.go:34] apiserver oom_adj: -16
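
[Editor's note] ops.go confirms the API server's OOM score adjustment: -16 biases the kernel OOM killer away from the process. The same probe in Go, assuming a single kube-apiserver process (pgrep can return several PIDs) and the legacy /proc/<pid>/oom_adj interface the log uses:

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func apiserverOOMAdj() error {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            return err
        }
        pids := strings.Fields(string(out))
        if len(pids) == 0 {
            return fmt.Errorf("kube-apiserver not running")
        }
        adj, err := os.ReadFile("/proc/" + pids[0] + "/oom_adj") // take the first match
        if err != nil {
            return err
        }
        fmt.Printf("apiserver oom_adj: %s", adj)
        return nil
    }
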
	I1101 00:09:36.329031   30593 kubeadm.go:640] restartCluster took 22.492412523s
	I1101 00:09:36.329039   30593 kubeadm.go:406] StartCluster complete in 22.520229717s
	I1101 00:09:36.329066   30593 settings.go:142] acquiring lock: {Name:mk57c659cffa0c6a1b184e5906c662f85ff8a099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:09:36.329145   30593 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7251/kubeconfig
	I1101 00:09:36.329734   30593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7251/kubeconfig: {Name:mk525de6243b20b40961c1a878f4272a26e9a097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:09:36.329976   30593 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 00:09:36.330139   30593 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 00:09:36.330259   30593 config.go:182] Loaded profile config "multinode-391061": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1101 00:09:36.332516   30593 out.go:177] * Enabled addons: 
	I1101 00:09:36.330334   30593 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17486-7251/kubeconfig
	I1101 00:09:36.334140   30593 addons.go:502] enable addons completed in 4.002956ms: enabled=[]
	I1101 00:09:36.332878   30593 kapi.go:59] client config for multinode-391061: &rest.Config{Host:"https://192.168.39.43:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/client.key", CAFile:"/home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 00:09:36.334423   30593 round_trippers.go:463] GET https://192.168.39.43:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1101 00:09:36.334436   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:36.334446   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:36.334454   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:36.337955   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:36.337986   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:36.337996   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:36.338004   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:36.338012   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:36.338027   30593 round_trippers.go:580]     Content-Length: 292
	I1101 00:09:36.338038   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:36 GMT
	I1101 00:09:36.338050   30593 round_trippers.go:580]     Audit-Id: 9324051b-7b18-4bb3-a5fe-00967444602f
	I1101 00:09:36.338061   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:36.338088   30593 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0a6ee33a-4e79-49d5-be0e-4e19b76eb2c6","resourceVersion":"1206","creationTimestamp":"2023-11-01T00:02:20Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1101 00:09:36.338210   30593 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-391061" context rescaled to 1 replicas
	I1101 00:09:36.338240   30593 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1101 00:09:36.340479   30593 out.go:177] * Verifying Kubernetes components...
	I1101 00:09:36.342243   30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:09:36.464070   30593 command_runner.go:130] > apiVersion: v1
	I1101 00:09:36.464088   30593 command_runner.go:130] > data:
	I1101 00:09:36.464092   30593 command_runner.go:130] >   Corefile: |
	I1101 00:09:36.464096   30593 command_runner.go:130] >     .:53 {
	I1101 00:09:36.464099   30593 command_runner.go:130] >         log
	I1101 00:09:36.464104   30593 command_runner.go:130] >         errors
	I1101 00:09:36.464108   30593 command_runner.go:130] >         health {
	I1101 00:09:36.464112   30593 command_runner.go:130] >            lameduck 5s
	I1101 00:09:36.464116   30593 command_runner.go:130] >         }
	I1101 00:09:36.464124   30593 command_runner.go:130] >         ready
	I1101 00:09:36.464129   30593 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1101 00:09:36.464134   30593 command_runner.go:130] >            pods insecure
	I1101 00:09:36.464139   30593 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1101 00:09:36.464143   30593 command_runner.go:130] >            ttl 30
	I1101 00:09:36.464147   30593 command_runner.go:130] >         }
	I1101 00:09:36.464151   30593 command_runner.go:130] >         prometheus :9153
	I1101 00:09:36.464154   30593 command_runner.go:130] >         hosts {
	I1101 00:09:36.464159   30593 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I1101 00:09:36.464163   30593 command_runner.go:130] >            fallthrough
	I1101 00:09:36.464167   30593 command_runner.go:130] >         }
	I1101 00:09:36.464175   30593 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1101 00:09:36.464180   30593 command_runner.go:130] >            max_concurrent 1000
	I1101 00:09:36.464184   30593 command_runner.go:130] >         }
	I1101 00:09:36.464188   30593 command_runner.go:130] >         cache 30
	I1101 00:09:36.464193   30593 command_runner.go:130] >         loop
	I1101 00:09:36.464198   30593 command_runner.go:130] >         reload
	I1101 00:09:36.464202   30593 command_runner.go:130] >         loadbalance
	I1101 00:09:36.464217   30593 command_runner.go:130] >     }
	I1101 00:09:36.464224   30593 command_runner.go:130] > kind: ConfigMap
	I1101 00:09:36.464228   30593 command_runner.go:130] > metadata:
	I1101 00:09:36.464233   30593 command_runner.go:130] >   creationTimestamp: "2023-11-01T00:02:20Z"
	I1101 00:09:36.464237   30593 command_runner.go:130] >   name: coredns
	I1101 00:09:36.464242   30593 command_runner.go:130] >   namespace: kube-system
	I1101 00:09:36.464246   30593 command_runner.go:130] >   resourceVersion: "404"
	I1101 00:09:36.464251   30593 command_runner.go:130] >   uid: 9916bcab-f9a6-4b1c-a0a4-a33e2e2f738c
	I1101 00:09:36.466580   30593 node_ready.go:35] waiting up to 6m0s for node "multinode-391061" to be "Ready" ...
	I1101 00:09:36.466667   30593 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1101 00:09:36.513888   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:36.513918   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:36.513926   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:36.513933   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:36.516967   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:36.516991   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:36.517002   30593 round_trippers.go:580]     Audit-Id: 4d84eb47-da1a-4fd0-96d7-b23c142dcf7c
	I1101 00:09:36.517010   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:36.517018   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:36.517030   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:36.517038   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:36.517064   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:36 GMT
	I1101 00:09:36.517425   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
	I1101 00:09:36.714232   30593 request.go:629] Waited for 196.4313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:36.714301   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:36.714308   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:36.714319   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:36.714329   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:36.716978   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:36.716999   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:36.717006   30593 round_trippers.go:580]     Audit-Id: 043fbdbd-3263-4587-9070-be445407c188
	I1101 00:09:36.717012   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:36.717017   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:36.717022   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:36.717027   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:36.717035   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:36 GMT
	I1101 00:09:36.717202   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
	I1101 00:09:37.218413   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:37.218434   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:37.218447   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:37.218453   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:37.222719   30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:09:37.222748   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:37.222759   30593 round_trippers.go:580]     Audit-Id: 917dad8e-af16-42b6-88ae-5dcab424bb1e
	I1101 00:09:37.222768   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:37.222778   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:37.222790   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:37.222802   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:37.222813   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:37 GMT
	I1101 00:09:37.223475   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
	I1101 00:09:37.718082   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:37.718126   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:37.718135   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:37.718141   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:37.721049   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:37.721077   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:37.721088   30593 round_trippers.go:580]     Audit-Id: 06dcc7c1-bdd2-4e9f-870d-80146268aafa
	I1101 00:09:37.721101   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:37.721121   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:37.721130   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:37.721139   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:37.721148   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:37 GMT
	I1101 00:09:37.721272   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
	I1101 00:09:38.218868   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:38.218893   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:38.218903   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:38.218912   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:38.222059   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:38.222083   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:38.222105   30593 round_trippers.go:580]     Audit-Id: ad14bc98-1add-4a13-8ab1-495ec6575c6e
	I1101 00:09:38.222111   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:38.222116   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:38.222121   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:38.222126   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:38.222131   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:38 GMT
	I1101 00:09:38.222638   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1133","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5291 chars]
	I1101 00:09:38.718331   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:38.718356   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:38.718364   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:38.718370   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:38.721280   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:38.721307   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:38.721314   30593 round_trippers.go:580]     Audit-Id: 32a342cc-ec48-43cc-b0f0-efe6838ba34f
	I1101 00:09:38.721319   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:38.721324   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:38.721329   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:38.721334   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:38.721339   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:38 GMT
	I1101 00:09:38.721695   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:38.722003   30593 node_ready.go:49] node "multinode-391061" has status "Ready":"True"
	I1101 00:09:38.722018   30593 node_ready.go:38] duration metric: took 2.255410222s waiting for node "multinode-391061" to be "Ready" ...
	I1101 00:09:38.722030   30593 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:09:38.722093   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I1101 00:09:38.722102   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:38.722113   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:38.722121   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:38.726178   30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:09:38.726200   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:38.726211   30593 round_trippers.go:580]     Audit-Id: d4651bc2-6bb9-4745-9c25-8f2b530c877c
	I1101 00:09:38.726220   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:38.726227   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:38.726236   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:38.726244   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:38.726253   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:38 GMT
	I1101 00:09:38.727979   30593 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1218"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84372 chars]
	I1101 00:09:38.731666   30593 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:38.731777   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
	I1101 00:09:38.731788   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:38.731797   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:38.731804   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:38.734353   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:38.734368   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:38.734375   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:38.734380   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:38.734386   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:38.734391   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:38.734396   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:38 GMT
	I1101 00:09:38.734401   30593 round_trippers.go:580]     Audit-Id: f0f6d35c-893f-4b34-bb39-154e16bedbe1
	I1101 00:09:38.734672   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1101 00:09:38.735183   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:38.735200   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:38.735208   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:38.735214   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:38.737368   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:38.737382   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:38.737388   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:38.737393   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:38.737398   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:38.737405   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:38 GMT
	I1101 00:09:38.737418   30593 round_trippers.go:580]     Audit-Id: f978b19f-d984-48d1-b95c-0f850f106969
	I1101 00:09:38.737423   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:38.737700   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:38.738062   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
	I1101 00:09:38.738078   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:38.738086   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:38.738092   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:38.740363   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:38.740379   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:38.740385   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:38.740390   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:38.740395   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:38.740408   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:38.740418   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:38 GMT
	I1101 00:09:38.740423   30593 round_trippers.go:580]     Audit-Id: c33f3cc3-4753-4832-a887-2f2bce060625
	I1101 00:09:38.740727   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1101 00:09:38.741200   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:38.741213   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:38.741220   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:38.741226   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:38.743369   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:38.743385   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:38.743392   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:38 GMT
	I1101 00:09:38.743397   30593 round_trippers.go:580]     Audit-Id: ccc0a48d-0d10-468a-a49f-71ad3ebd3363
	I1101 00:09:38.743402   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:38.743407   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:38.743414   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:38.743419   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:38.743797   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:39.244680   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
	I1101 00:09:39.244705   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:39.244713   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:39.244719   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:39.249913   30593 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1101 00:09:39.249935   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:39.249943   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:39.249948   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:39.249954   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:39.249959   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:39.249964   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:39 GMT
	I1101 00:09:39.249971   30593 round_trippers.go:580]     Audit-Id: 12d94c73-c75e-46e9-871a-9b74acd630d6
	I1101 00:09:39.250237   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1101 00:09:39.250731   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:39.250745   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:39.250754   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:39.250760   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:39.253732   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:39.253752   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:39.253761   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:39.253770   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:39 GMT
	I1101 00:09:39.253778   30593 round_trippers.go:580]     Audit-Id: 2a48db27-174b-4246-a989-ca7f61b115f9
	I1101 00:09:39.253787   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:39.253793   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:39.253798   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:39.254037   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:39.744690   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
	I1101 00:09:39.744715   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:39.744724   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:39.744729   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:39.748026   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:39.748050   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:39.748060   30593 round_trippers.go:580]     Audit-Id: d31dc218-4603-4f82-a559-2e3697ff06e2
	I1101 00:09:39.748072   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:39.748080   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:39.748087   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:39.748098   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:39.748105   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:39 GMT
	I1101 00:09:39.748732   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1101 00:09:39.749181   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:39.749196   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:39.749206   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:39.749215   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:39.751958   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:39.751980   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:39.751989   30593 round_trippers.go:580]     Audit-Id: b460f490-de79-4762-b30a-6cdd07942ced
	I1101 00:09:39.751997   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:39.752005   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:39.752015   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:39.752021   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:39.752029   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:39 GMT
	I1101 00:09:39.752310   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:40.244413   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
	I1101 00:09:40.244438   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:40.244446   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:40.244452   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:40.248489   30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:09:40.248512   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:40.248521   30593 round_trippers.go:580]     Audit-Id: ccff4954-c9ff-4a7f-9536-aa2b767dc311
	I1101 00:09:40.248528   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:40.248533   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:40.248538   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:40.248544   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:40.248549   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:40 GMT
	I1101 00:09:40.248729   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1101 00:09:40.249180   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:40.249194   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:40.249201   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:40.249209   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:40.252171   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:40.252188   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:40.252194   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:40.252199   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:40.252203   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:40.252208   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:40.252213   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:40 GMT
	I1101 00:09:40.252218   30593 round_trippers.go:580]     Audit-Id: ca95e9f6-880f-4555-aa29-16a66b7bf628
	I1101 00:09:40.252484   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:40.745314   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
	I1101 00:09:40.745341   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:40.745350   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:40.745357   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:40.747878   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:40.747895   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:40.747902   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:40.747910   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:40.747924   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:40 GMT
	I1101 00:09:40.747932   30593 round_trippers.go:580]     Audit-Id: b88089ad-e6cf-4b38-b7fb-da565b4e5c79
	I1101 00:09:40.747940   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:40.747951   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:40.748125   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1101 00:09:40.748587   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:40.748601   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:40.748611   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:40.748617   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:40.750689   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:40.750703   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:40.750710   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:40 GMT
	I1101 00:09:40.750721   30593 round_trippers.go:580]     Audit-Id: 3a208361-9be9-4a15-8f86-f26ff624d9b3
	I1101 00:09:40.750729   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:40.750736   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:40.750744   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:40.750755   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:40.750912   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:40.751208   30593 pod_ready.go:102] pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace has status "Ready":"False"
	I1101 00:09:41.244531   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
	I1101 00:09:41.244555   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:41.244563   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:41.244569   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:41.247236   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:41.247254   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:41.247264   30593 round_trippers.go:580]     Audit-Id: 0a7a1192-7352-4f99-a239-ebbd6ca40e85
	I1101 00:09:41.247272   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:41.247279   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:41.247289   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:41.247298   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:41.247318   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:41 GMT
	I1101 00:09:41.247449   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1101 00:09:41.247870   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:41.247882   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:41.247889   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:41.247894   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:41.250080   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:41.250098   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:41.250104   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:41.250109   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:41 GMT
	I1101 00:09:41.250114   30593 round_trippers.go:580]     Audit-Id: 629d69c5-3174-4a7d-aa0d-8f22f6d5b2f6
	I1101 00:09:41.250130   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:41.250138   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:41.250146   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:41.250326   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:41.745038   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
	I1101 00:09:41.745066   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:41.745074   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:41.745080   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:41.748544   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:41.748570   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:41.748581   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:41.748590   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:41.748598   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:41.748606   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:41 GMT
	I1101 00:09:41.748625   30593 round_trippers.go:580]     Audit-Id: b22bcb01-f5bf-4a1d-aad0-6c0ab2d577d4
	I1101 00:09:41.748637   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:41.748855   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1184","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I1101 00:09:41.749306   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:41.749318   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:41.749325   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:41.749331   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:41.755594   30593 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1101 00:09:41.755639   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:41.755649   30593 round_trippers.go:580]     Audit-Id: a64448a4-caec-4cfe-9700-2fbbc35230d2
	I1101 00:09:41.755657   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:41.755665   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:41.755673   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:41.755680   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:41.755695   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:41 GMT
	I1101 00:09:41.755860   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:42.244432   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dg5w7
	I1101 00:09:42.244456   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:42.244464   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:42.244470   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:42.247204   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:42.247227   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:42.247238   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:42.247247   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:42.247256   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:42 GMT
	I1101 00:09:42.247267   30593 round_trippers.go:580]     Audit-Id: 003f9883-5c30-40fd-aa1f-88b585473b07
	I1101 00:09:42.247272   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:42.247278   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:42.247475   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1232","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
	I1101 00:09:42.248064   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:42.248082   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:42.248093   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:42.248100   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:42.251135   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:42.251152   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:42.251158   30593 round_trippers.go:580]     Audit-Id: 1d944e3b-2b90-4cb4-b54e-e4dc8e023493
	I1101 00:09:42.251168   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:42.251172   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:42.251177   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:42.251182   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:42.251187   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:42 GMT
	I1101 00:09:42.251385   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:42.251763   30593 pod_ready.go:92] pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace has status "Ready":"True"
	I1101 00:09:42.251782   30593 pod_ready.go:81] duration metric: took 3.52008861s waiting for pod "coredns-5dd5756b68-dg5w7" in "kube-system" namespace to be "Ready" ...
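
The 3.52s recorded above is the sum of roughly half-second poll iterations: each iteration GETs the pod, GETs its node, checks the pod's Ready condition (the pod_ready.go lines), and retries after about 500ms, up to the 6m0s budget each of these waits uses (see the etcd wait that follows). A minimal client-go sketch of that pattern, assuming a prebuilt clientset (an illustration, not minikube's actual pod_ready.go):

package example

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls one pod until its Ready condition is True or the
// timeout expires. The 500ms interval and 6m timeout mirror the figures
// visible in this log; everything else is an assumption of the sketch.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	ctx, cancel := context.WithTimeout(ctx, 6*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // status "Ready":"True", as pod_ready.go reports above
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %s/%s never became Ready: %w", ns, name, ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}
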
	I1101 00:09:42.251794   30593 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-391061" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:42.251868   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-391061
	I1101 00:09:42.251880   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:42.251891   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:42.251901   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:42.253932   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:42.253950   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:42.253957   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:42.253962   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:42.253967   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:42.253975   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:42 GMT
	I1101 00:09:42.253980   30593 round_trippers.go:580]     Audit-Id: 8a73d4e8-1e4e-4883-908a-5c09ce62f8c3
	I1101 00:09:42.253985   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:42.254150   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-391061","namespace":"kube-system","uid":"0537cc4c-2127-4424-b02f-9e4747bc8713","resourceVersion":"1227","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.43:2379","kubernetes.io/config.hash":"3983ae368fa449f28180d36143aa3911","kubernetes.io/config.mirror":"3983ae368fa449f28180d36143aa3911","kubernetes.io/config.seen":"2023-11-01T00:02:21.059094445Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6072 chars]
	I1101 00:09:42.254640   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:42.254655   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:42.254674   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:42.254685   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:42.256694   30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:09:42.256708   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:42.256715   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:42.256723   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:42.256731   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:42.256740   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:42 GMT
	I1101 00:09:42.256749   30593 round_trippers.go:580]     Audit-Id: 4c1b620e-fff1-4494-89d2-83c513fc0fc0
	I1101 00:09:42.256757   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:42.256951   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:42.257268   30593 pod_ready.go:92] pod "etcd-multinode-391061" in "kube-system" namespace has status "Ready":"True"
	I1101 00:09:42.257283   30593 pod_ready.go:81] duration metric: took 5.477797ms waiting for pod "etcd-multinode-391061" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:42.257306   30593 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-391061" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:42.257369   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
	I1101 00:09:42.257379   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:42.257390   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:42.257399   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:42.259467   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:42.259483   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:42.259492   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:42 GMT
	I1101 00:09:42.259499   30593 round_trippers.go:580]     Audit-Id: 05d95e16-1d4e-4f81-a9d5-b2b141ff765d
	I1101 00:09:42.259508   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:42.259517   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:42.259526   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:42.259535   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:42.259733   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
	I1101 00:09:42.260255   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:42.260274   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:42.260281   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:42.260287   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:42.262250   30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:09:42.262265   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:42.262275   30593 round_trippers.go:580]     Audit-Id: ff748f0c-35a9-4061-b5ed-b0472309e27b
	I1101 00:09:42.262282   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:42.262290   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:42.262298   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:42.262310   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:42.262318   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:42 GMT
	I1101 00:09:42.262580   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:42.314176   30593 request.go:629] Waited for 51.260114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
	I1101 00:09:42.314237   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
	I1101 00:09:42.314242   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:42.314249   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:42.314256   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:42.317908   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:42.317937   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:42.317948   30593 round_trippers.go:580]     Audit-Id: fa52f436-6e2b-418e-972d-6b4c1f1c0fcb
	I1101 00:09:42.317957   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:42.317966   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:42.317971   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:42.317976   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:42.317984   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:42 GMT
	I1101 00:09:42.318154   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
	I1101 00:09:42.514148   30593 request.go:629] Waited for 195.42483ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:42.514213   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:42.514221   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:42.514235   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:42.514291   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:42.516991   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:42.517017   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:42.517026   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:42.517035   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:42 GMT
	I1101 00:09:42.517044   30593 round_trippers.go:580]     Audit-Id: 71439942-ddcd-4159-8952-4d34c7b14582
	I1101 00:09:42.517052   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:42.517059   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:42.517068   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:42.517221   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:43.018410   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
	I1101 00:09:43.018439   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:43.018449   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:43.018459   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:43.021587   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:43.021609   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:43.021616   30593 round_trippers.go:580]     Audit-Id: 7c4f42ca-82c7-4601-9dd3-7fa193eec32f
	I1101 00:09:43.021621   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:43.021626   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:43.021631   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:43.021636   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:43.021642   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:42 GMT
	I1101 00:09:43.021917   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
	I1101 00:09:43.022342   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:43.022357   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:43.022368   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:43.022376   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:43.025247   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:43.025262   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:43.025268   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:43.025280   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:43.025289   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:43.025298   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:43.025310   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:42 GMT
	I1101 00:09:43.025316   30593 round_trippers.go:580]     Audit-Id: a4d1586f-de58-43b9-93f2-43b9726b8133
	I1101 00:09:43.025864   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:43.518711   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
	I1101 00:09:43.518737   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:43.518746   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:43.518752   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:43.521991   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:43.522017   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:43.522027   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:43.522036   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:43.522044   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:43 GMT
	I1101 00:09:43.522058   30593 round_trippers.go:580]     Audit-Id: ee145f23-1a35-4e40-acd4-1b329858fdfd
	I1101 00:09:43.522065   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:43.522076   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:43.522321   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
	I1101 00:09:43.522816   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:43.522832   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:43.522839   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:43.522845   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:43.525300   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:43.525321   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:43.525329   30593 round_trippers.go:580]     Audit-Id: a16446ac-4c9e-462b-a604-37ce52442eb5
	I1101 00:09:43.525336   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:43.525344   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:43.525351   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:43.525358   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:43.525365   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:43 GMT
	I1101 00:09:43.525589   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:44.018504   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
	I1101 00:09:44.018526   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:44.018534   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:44.018539   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:44.021345   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:44.021368   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:44.021379   30593 round_trippers.go:580]     Audit-Id: 23afddaf-e391-4a40-9206-ba5a97021cd1
	I1101 00:09:44.021389   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:44.021397   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:44.021402   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:44.021408   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:44.021413   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:43 GMT
	I1101 00:09:44.021781   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
	I1101 00:09:44.022178   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:44.022191   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:44.022201   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:44.022206   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:44.024358   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:44.024374   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:44.024380   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:44.024385   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:44.024390   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:44.024395   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:43 GMT
	I1101 00:09:44.024400   30593 round_trippers.go:580]     Audit-Id: 10d30ea6-f2a4-4468-b8d9-fe4d25cd5e9a
	I1101 00:09:44.024404   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:44.024539   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:44.518209   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
	I1101 00:09:44.518235   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:44.518243   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:44.518249   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:44.521184   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:44.521208   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:44.521218   30593 round_trippers.go:580]     Audit-Id: fc8c6383-2699-422a-8176-ddcab44a9a9c
	I1101 00:09:44.521238   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:44.521246   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:44.521255   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:44.521264   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:44.521273   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:44 GMT
	I1101 00:09:44.521459   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
	I1101 00:09:44.521894   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:44.521907   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:44.521914   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:44.521920   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:44.524063   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:44.524079   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:44.524085   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:44 GMT
	I1101 00:09:44.524135   30593 round_trippers.go:580]     Audit-Id: e14e26a5-28ca-4d3f-bae4-eea46c9e3a5b
	I1101 00:09:44.524159   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:44.524167   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:44.524177   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:44.524182   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:44.524354   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:44.524642   30593 pod_ready.go:102] pod "kube-apiserver-multinode-391061" in "kube-system" namespace has status "Ready":"False"
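
At this point the poll keeps observing the pod at resourceVersion 1182 with Ready still False; a few iterations later the update lands and the same GET returns resourceVersion 1242 with Ready True. A hypothetical alternative to the 500ms loop, not what minikube does, is to open a watch scoped to the one pod and return on the event that flips the condition:

package example

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchPodReady is a sketch: instead of re-GETting every 500ms, it opens a
// watch filtered to a single pod and returns once a received event shows
// the Ready condition true. The field selector and handling are assumptions.
func watchPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	w, err := cs.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{
		FieldSelector: "metadata.name=" + name, // server-side filter to this pod only
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
	}
	return fmt.Errorf("watch closed before %s/%s became Ready", ns, name)
}
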
	I1101 00:09:45.017778   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
	I1101 00:09:45.017807   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:45.017815   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:45.017822   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:45.021073   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:45.021103   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:45.021114   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:45.021124   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:45.021133   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:45.021142   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:45.021151   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:44 GMT
	I1101 00:09:45.021160   30593 round_trippers.go:580]     Audit-Id: 0dd2be34-8929-487b-8348-a144ffa6b941
	I1101 00:09:45.021400   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1182","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
	I1101 00:09:45.021872   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:45.021889   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:45.021897   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:45.021908   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:45.024844   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:45.024865   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:45.024874   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:45.024882   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:45.024889   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:44 GMT
	I1101 00:09:45.024897   30593 round_trippers.go:580]     Audit-Id: db32154e-ea80-4382-b7a1-53821506f75f
	I1101 00:09:45.024905   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:45.024912   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:45.025668   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:45.518404   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-391061
	I1101 00:09:45.518429   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:45.518437   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:45.518442   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:45.521045   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:45.521065   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:45.521072   30593 round_trippers.go:580]     Audit-Id: 32e5cb3c-6d81-4568-831d-7a0dc39dbca2
	I1101 00:09:45.521077   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:45.521088   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:45.521093   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:45.521098   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:45.521103   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:45 GMT
	I1101 00:09:45.521484   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-391061","namespace":"kube-system","uid":"dff82899-3db2-46a2-aea0-ec57d58be1c8","resourceVersion":"1242","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.43:8443","kubernetes.io/config.hash":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.mirror":"b1b3f1e5d8276558ad5f45ab6c7fece5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059087592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7607 chars]
	I1101 00:09:45.521900   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:45.521917   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:45.521924   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:45.521929   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:45.524067   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:45.524082   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:45.524088   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:45.524096   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:45 GMT
	I1101 00:09:45.524104   30593 round_trippers.go:580]     Audit-Id: 31736dc5-73c3-44fb-9ab2-5a9f73f0e730
	I1101 00:09:45.524113   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:45.524121   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:45.524130   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:45.524429   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:45.524707   30593 pod_ready.go:92] pod "kube-apiserver-multinode-391061" in "kube-system" namespace has status "Ready":"True"
	I1101 00:09:45.524722   30593 pod_ready.go:81] duration metric: took 3.267408141s waiting for pod "kube-apiserver-multinode-391061" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:45.524730   30593 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-391061" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:45.524780   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-391061
	I1101 00:09:45.524789   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:45.524796   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:45.524801   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:45.526609   30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:09:45.526623   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:45.526629   30593 round_trippers.go:580]     Audit-Id: c91e4f63-f1b9-4d99-b2a0-1ae44d4e3920
	I1101 00:09:45.526634   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:45.526639   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:45.526644   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:45.526649   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:45.526654   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:45 GMT
	I1101 00:09:45.526976   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-391061","namespace":"kube-system","uid":"4775e566-6acd-43ac-b7cd-8dbd245c33cf","resourceVersion":"1240","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"129a8ea77cdb10a9dd895cecf9b472c5","kubernetes.io/config.mirror":"129a8ea77cdb10a9dd895cecf9b472c5","kubernetes.io/config.seen":"2023-11-01T00:02:21.059092388Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
	I1101 00:09:45.527354   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:45.527366   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:45.527373   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:45.527379   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:45.529038   30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:09:45.529053   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:45.529064   30593 round_trippers.go:580]     Audit-Id: 6d668043-98c8-4c98-9b23-07c7419995e3
	I1101 00:09:45.529069   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:45.529074   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:45.529079   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:45.529084   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:45.529089   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:45 GMT
	I1101 00:09:45.529310   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:45.529599   30593 pod_ready.go:92] pod "kube-controller-manager-multinode-391061" in "kube-system" namespace has status "Ready":"True"
	I1101 00:09:45.529612   30593 pod_ready.go:81] duration metric: took 4.877104ms waiting for pod "kube-controller-manager-multinode-391061" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:45.529629   30593 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-clsrp" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:45.529698   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-clsrp
	I1101 00:09:45.529709   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:45.529717   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:45.529727   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:45.531667   30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:09:45.531685   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:45.531694   30593 round_trippers.go:580]     Audit-Id: 179e6548-b6dd-4972-8941-597dc0f20790
	I1101 00:09:45.531703   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:45.531718   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:45.531724   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:45.531731   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:45.531737   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:45 GMT
	I1101 00:09:45.532195   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-clsrp","generateName":"kube-proxy-","namespace":"kube-system","uid":"a747b091-d679-4ae6-a995-c980235c9a61","resourceVersion":"1203","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4c7befcb-672f-40b1-8090-82e04bd6e1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c7befcb-672f-40b1-8090-82e04bd6e1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5738 chars]
	I1101 00:09:45.713849   30593 request.go:629] Waited for 181.057235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:45.713909   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:45.713914   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:45.713921   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:45.713927   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:45.716619   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:45.716637   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:45.716643   30593 round_trippers.go:580]     Audit-Id: 426c242f-3496-4e53-8631-c1189b21932f
	I1101 00:09:45.716649   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:45.716657   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:45.716665   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:45.716677   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:45.716689   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:45 GMT
	I1101 00:09:45.716889   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:45.717308   30593 pod_ready.go:92] pod "kube-proxy-clsrp" in "kube-system" namespace has status "Ready":"True"
	I1101 00:09:45.717325   30593 pod_ready.go:81] duration metric: took 187.686843ms waiting for pod "kube-proxy-clsrp" in "kube-system" namespace to be "Ready" ...
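
The "Waited for … due to client-side throttling, not priority and fairness" lines above come from client-go's own token-bucket rate limiter, which delays requests locally before they ever reach the API server. A minimal sketch of that mechanism using client-go's flowcontrol package; the QPS and burst values are client-go's common defaults, assumed here rather than read from minikube's configuration:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Token bucket: 5 requests/second sustained, bursts of up to 10.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
	for i := 0; i < 15; i++ {
		start := time.Now()
		limiter.Accept() // blocks until a token is available
		if waited := time.Since(start); waited > time.Millisecond {
			// Corresponds to the "Waited for ... due to client-side throttling" lines.
			fmt.Printf("request %d waited %v before being sent\n", i, waited)
		}
	}
}

After the first ten burst tokens are spent, each further request waits roughly 200ms, which matches the ~180-200ms gaps logged above.
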
	I1101 00:09:45.717337   30593 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rcnv9" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:45.914796   30593 request.go:629] Waited for 197.399239ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rcnv9
	I1101 00:09:45.914852   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rcnv9
	I1101 00:09:45.914857   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:45.914864   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:45.914871   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:45.917416   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:45.917445   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:45.917454   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:45 GMT
	I1101 00:09:45.917462   30593 round_trippers.go:580]     Audit-Id: 9cba40f3-3ad3-42a3-b93f-aa9cc6fc7dd3
	I1101 00:09:45.917475   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:45.917480   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:45.917486   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:45.917492   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:45.917704   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rcnv9","generateName":"kube-proxy-","namespace":"kube-system","uid":"9b65a6f4-4c34-40e5-a5bd-aedfc335cbc9","resourceVersion":"983","creationTimestamp":"2023-11-01T00:03:22Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4c7befcb-672f-40b1-8090-82e04bd6e1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:03:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c7befcb-672f-40b1-8090-82e04bd6e1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5749 chars]
	I1101 00:09:46.114598   30593 request.go:629] Waited for 196.375687ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m02
	I1101 00:09:46.114664   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m02
	I1101 00:09:46.114691   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:46.114704   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:46.114710   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:46.117340   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:46.117362   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:46.117371   30593 round_trippers.go:580]     Audit-Id: fc111c34-c570-4e3f-9832-d982a0432bc7
	I1101 00:09:46.117379   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:46.117388   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:46.117396   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:46.117408   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:46.117421   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:46 GMT
	I1101 00:09:46.117518   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061-m02","uid":"75fe164a-6fd6-4525-bacf-d792a509255b","resourceVersion":"999","creationTimestamp":"2023-11-01T00:07:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3253 chars]
	I1101 00:09:46.117775   30593 pod_ready.go:92] pod "kube-proxy-rcnv9" in "kube-system" namespace has status "Ready":"True"
	I1101 00:09:46.117792   30593 pod_ready.go:81] duration metric: took 400.44672ms waiting for pod "kube-proxy-rcnv9" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:46.117804   30593 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vdjh2" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:46.314248   30593 request.go:629] Waited for 196.387545ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vdjh2
	I1101 00:09:46.314341   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vdjh2
	I1101 00:09:46.314358   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:46.314369   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:46.314378   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:46.317400   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:46.317420   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:46.317429   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:46.317437   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:46 GMT
	I1101 00:09:46.317445   30593 round_trippers.go:580]     Audit-Id: feb64aac-545a-4487-be55-41e7c0e9ef0c
	I1101 00:09:46.317454   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:46.317463   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:46.317473   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:46.317739   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vdjh2","generateName":"kube-proxy-","namespace":"kube-system","uid":"9838a111-09e4-4975-b925-1ae5dcfa7334","resourceVersion":"1096","creationTimestamp":"2023-11-01T00:04:18Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4c7befcb-672f-40b1-8090-82e04bd6e1a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:04:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c7befcb-672f-40b1-8090-82e04bd6e1a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I1101 00:09:46.514556   30593 request.go:629] Waited for 196.355467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m03
	I1101 00:09:46.514623   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061-m03
	I1101 00:09:46.514630   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:46.514642   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:46.514652   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:46.517667   30593 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1101 00:09:46.517686   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:46.517695   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:46 GMT
	I1101 00:09:46.517703   30593 round_trippers.go:580]     Audit-Id: dee8bed2-39ff-4ddf-9b35-2afcacefb08c
	I1101 00:09:46.517710   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:46.517717   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:46.517725   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:46.517732   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:46.517743   30593 round_trippers.go:580]     Content-Length: 210
	I1101 00:09:46.517769   30593 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-391061-m03\" not found","reason":"NotFound","details":{"name":"multinode-391061-m03","kind":"nodes"},"code":404}
	I1101 00:09:46.517879   30593 pod_ready.go:97] node "multinode-391061-m03" hosting pod "kube-proxy-vdjh2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-391061-m03": nodes "multinode-391061-m03" not found
	I1101 00:09:46.517896   30593 pod_ready.go:81] duration metric: took 400.083902ms waiting for pod "kube-proxy-vdjh2" in "kube-system" namespace to be "Ready" ...
	E1101 00:09:46.517909   30593 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-391061-m03" hosting pod "kube-proxy-vdjh2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-391061-m03": nodes "multinode-391061-m03" not found
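
pod_ready.go's loop above fetches each pod and then the node hosting it; when the node is gone (the 404 for "multinode-391061-m03"), the pod is skipped instead of failing the whole wait. A rough sketch of that logic with client-go's polling helper; waitPodReady is a made-up name, and this approximates the behaviour rather than reproducing minikube's code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the pod reports Ready, skipping pods whose
// hosting node no longer exists, as in the "skipping!" branch above.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 400*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			_, err = cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				fmt.Printf("node %q not found, skipping pod %q\n", pod.Spec.NodeName, name)
				return true, nil // a pod on a deleted node can never become Ready
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {} // wiring a clientset from a kubeconfig is omitted for brevity
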
	I1101 00:09:46.517918   30593 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-391061" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:46.714359   30593 request.go:629] Waited for 196.368032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
	I1101 00:09:46.714428   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
	I1101 00:09:46.714439   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:46.714450   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:46.714460   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:46.717601   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:46.717622   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:46.717631   30593 round_trippers.go:580]     Audit-Id: b10ec514-fb68-4eb7-a82b-478bb7b2615a
	I1101 00:09:46.717638   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:46.717646   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:46.717653   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:46.717660   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:46.717669   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:46 GMT
	I1101 00:09:46.718240   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-391061","namespace":"kube-system","uid":"eaf767ff-8f68-4b91-bcd7-b550481a6155","resourceVersion":"1187","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.mirror":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.seen":"2023-11-01T00:02:21.059093363Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I1101 00:09:46.913939   30593 request.go:629] Waited for 195.310235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:46.913993   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:46.913998   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:46.914005   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:46.914018   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:46.916550   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:46.916574   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:46.916590   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:46.916598   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:46.916605   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:46.916613   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:46 GMT
	I1101 00:09:46.916622   30593 round_trippers.go:580]     Audit-Id: 3fdb3127-adb6-4b1b-973b-56d6f01c7510
	I1101 00:09:46.916635   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:46.916797   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:47.114664   30593 request.go:629] Waited for 197.399091ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
	I1101 00:09:47.114755   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
	I1101 00:09:47.114767   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:47.114785   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:47.114799   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:47.117780   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:47.117799   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:47.117806   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:47.117812   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:47.117817   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:47.117822   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:47 GMT
	I1101 00:09:47.117827   30593 round_trippers.go:580]     Audit-Id: 88a0065a-7184-46f2-bd0b-8a0b89e70b44
	I1101 00:09:47.117841   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:47.118061   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-391061","namespace":"kube-system","uid":"eaf767ff-8f68-4b91-bcd7-b550481a6155","resourceVersion":"1187","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.mirror":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.seen":"2023-11-01T00:02:21.059093363Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I1101 00:09:47.313739   30593 request.go:629] Waited for 195.316992ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:47.313819   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:47.313832   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:47.313850   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:47.313863   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:47.317452   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:47.317480   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:47.317490   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:47.317498   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:47.317506   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:47.317514   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:47.317522   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:47 GMT
	I1101 00:09:47.317530   30593 round_trippers.go:580]     Audit-Id: 2e316d17-f6a0-43df-b21e-ef5ee4396440
	I1101 00:09:47.317759   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:47.818890   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-391061
	I1101 00:09:47.818917   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:47.818925   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:47.818932   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:47.821524   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:47.821546   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:47.821558   30593 round_trippers.go:580]     Audit-Id: 50ab8a02-fab8-41d2-abe4-e6fa324b51f1
	I1101 00:09:47.821566   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:47.821574   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:47.821582   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:47.821590   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:47.821600   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:47 GMT
	I1101 00:09:47.822014   30593 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-391061","namespace":"kube-system","uid":"eaf767ff-8f68-4b91-bcd7-b550481a6155","resourceVersion":"1244","creationTimestamp":"2023-11-01T00:02:21Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.mirror":"5717ff99bbf840076eebaffea5e26d86","kubernetes.io/config.seen":"2023-11-01T00:02:21.059093363Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I1101 00:09:47.822399   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes/multinode-391061
	I1101 00:09:47.822414   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:47.822432   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:47.822440   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:47.825524   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:47.825549   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:47.825559   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:47.825568   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:47 GMT
	I1101 00:09:47.825576   30593 round_trippers.go:580]     Audit-Id: cff53b13-6010-47a4-94a7-bfaa8a544728
	I1101 00:09:47.825584   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:47.825592   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:47.825600   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:47.825781   30593 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:17Z","fieldsType":"Field [truncated 5164 chars]
	I1101 00:09:47.826104   30593 pod_ready.go:92] pod "kube-scheduler-multinode-391061" in "kube-system" namespace has status "Ready":"True"
	I1101 00:09:47.826120   30593 pod_ready.go:81] duration metric: took 1.308189456s waiting for pod "kube-scheduler-multinode-391061" in "kube-system" namespace to be "Ready" ...
	I1101 00:09:47.826129   30593 pod_ready.go:38] duration metric: took 9.10408386s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:09:47.826150   30593 api_server.go:52] waiting for apiserver process to appear ...
	I1101 00:09:47.826195   30593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:09:47.838151   30593 command_runner.go:130] > 1704
	I1101 00:09:47.838274   30593 api_server.go:72] duration metric: took 11.499995093s to wait for apiserver process to appear ...
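
The apiserver process check above runs pgrep inside the VM and treats its stdout (the single PID, 1704 here) as success. A local stand-in for illustration; minikube executes this through its ssh_runner rather than os/exec:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// -x exact match, -n newest process, -f match against the full command line.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver process not found:", err)
		return
	}
	fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
}
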
	I1101 00:09:47.838293   30593 api_server.go:88] waiting for apiserver healthz status ...
	I1101 00:09:47.838314   30593 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I1101 00:09:47.844117   30593 api_server.go:279] https://192.168.39.43:8443/healthz returned 200:
	ok
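
The healthz probe above is a plain HTTPS GET that considers the apiserver healthy when /healthz answers 200 with body "ok". A stand-alone version; the hard-coded endpoint and InsecureSkipVerify are illustration-only shortcuts (minikube authenticates against the cluster's CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	const healthz = "https://192.168.39.43:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: real code should trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(healthz)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// Healthy when the endpoint returns 200 with body "ok", as in the log above.
	fmt.Printf("%s returned %d: %s\n", healthz, resp.StatusCode, body)
}
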
	I1101 00:09:47.844194   30593 round_trippers.go:463] GET https://192.168.39.43:8443/version
	I1101 00:09:47.844207   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:47.844218   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:47.844226   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:47.845412   30593 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 00:09:47.845425   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:47.845431   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:47.845436   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:47.845442   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:47.845450   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:47.845463   30593 round_trippers.go:580]     Content-Length: 264
	I1101 00:09:47.845475   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:47 GMT
	I1101 00:09:47.845485   30593 round_trippers.go:580]     Audit-Id: 1468702f-2934-4914-b020-c0a4990038b1
	I1101 00:09:47.845504   30593 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1101 00:09:47.845540   30593 api_server.go:141] control plane version: v1.28.3
	I1101 00:09:47.845552   30593 api_server.go:131] duration metric: took 7.252944ms to wait for apiserver health ...
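
The /version body above maps directly onto apimachinery's version.Info struct; a sketch of decoding it, with the JSON literal copied from the response above:

package main

import (
	"encoding/json"
	"fmt"

	"k8s.io/apimachinery/pkg/version"
)

func main() {
	raw := `{"major":"1","minor":"28","gitVersion":"v1.28.3","gitCommit":"a8a1abc25cad87333840cd7d54be2efaf31a3177","gitTreeState":"clean","buildDate":"2023-10-18T11:33:18Z","goVersion":"go1.20.10","compiler":"gc","platform":"linux/amd64"}`
	var info version.Info
	if err := json.Unmarshal([]byte(raw), &info); err != nil {
		panic(err)
	}
	// Prints "control plane version: v1.28.3", matching the log line above.
	fmt.Println("control plane version:", info.GitVersion)
}
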
	I1101 00:09:47.845562   30593 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 00:09:47.913821   30593 request.go:629] Waited for 68.174041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I1101 00:09:47.913881   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I1101 00:09:47.913885   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:47.913893   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:47.913899   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:47.918202   30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:09:47.918230   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:47.918239   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:47.918248   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:47.918254   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:47 GMT
	I1101 00:09:47.918259   30593 round_trippers.go:580]     Audit-Id: b30ccebe-8256-4a7d-a462-7b4e1d0cdfa8
	I1101 00:09:47.918264   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:47.918269   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:47.920031   30593 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1244"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1232","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83346 chars]
	I1101 00:09:47.922413   30593 system_pods.go:59] 12 kube-system pods found
	I1101 00:09:47.922434   30593 system_pods.go:61] "coredns-5dd5756b68-dg5w7" [eb94555e-1465-4dec-9d6d-ebcbec02841e] Running
	I1101 00:09:47.922438   30593 system_pods.go:61] "etcd-multinode-391061" [0537cc4c-2127-4424-b02f-9e4747bc8713] Running
	I1101 00:09:47.922442   30593 system_pods.go:61] "kindnet-4jfj9" [2559e20b-85cf-43d5-8663-7ec855d71df9] Running
	I1101 00:09:47.922446   30593 system_pods.go:61] "kindnet-lcljq" [171d5f22-d781-4224-88f7-f940ad9e747b] Running
	I1101 00:09:47.922450   30593 system_pods.go:61] "kindnet-wrdhd" [85db010e-82bd-4efa-a760-0669bf1e52de] Running
	I1101 00:09:47.922454   30593 system_pods.go:61] "kube-apiserver-multinode-391061" [dff82899-3db2-46a2-aea0-ec57d58be1c8] Running
	I1101 00:09:47.922458   30593 system_pods.go:61] "kube-controller-manager-multinode-391061" [4775e566-6acd-43ac-b7cd-8dbd245c33cf] Running
	I1101 00:09:47.922462   30593 system_pods.go:61] "kube-proxy-clsrp" [a747b091-d679-4ae6-a995-c980235c9a61] Running
	I1101 00:09:47.922465   30593 system_pods.go:61] "kube-proxy-rcnv9" [9b65a6f4-4c34-40e5-a5bd-aedfc335cbc9] Running
	I1101 00:09:47.922476   30593 system_pods.go:61] "kube-proxy-vdjh2" [9838a111-09e4-4975-b925-1ae5dcfa7334] Running
	I1101 00:09:47.922481   30593 system_pods.go:61] "kube-scheduler-multinode-391061" [eaf767ff-8f68-4b91-bcd7-b550481a6155] Running
	I1101 00:09:47.922485   30593 system_pods.go:61] "storage-provisioner" [b0b970e9-7d0b-4e94-8ca8-2f3348eaf579] Running
	I1101 00:09:47.922492   30593 system_pods.go:74] duration metric: took 76.924582ms to wait for pod list to return data ...
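
The "12 kube-system pods found" summary above is one List call over the kube-system namespace followed by a per-pod status line. A compact equivalent under the same omitted-clientset assumption as the earlier sketch; listSystemPods is an illustrative name:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listSystemPods mirrors the system_pods.go summary lines above.
func listSystemPods(ctx context.Context, cs kubernetes.Interface) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
	return nil
}

func main() {} // see client-go's clientcmd package for building the clientset
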
	I1101 00:09:47.922513   30593 default_sa.go:34] waiting for default service account to be created ...
	I1101 00:09:48.113860   30593 request.go:629] Waited for 191.269729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/default/serviceaccounts
	I1101 00:09:48.113931   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/default/serviceaccounts
	I1101 00:09:48.113936   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:48.113943   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:48.113949   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:48.117152   30593 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 00:09:48.117173   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:48.117179   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:48.117184   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:48.117189   30593 round_trippers.go:580]     Content-Length: 262
	I1101 00:09:48.117194   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:48 GMT
	I1101 00:09:48.117199   30593 round_trippers.go:580]     Audit-Id: cf19f0f1-599a-4c01-a817-75c7ba89021a
	I1101 00:09:48.117204   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:48.117209   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:48.117226   30593 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1244"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"331ecfcc-8852-4250-85c2-da77e5b314fe","resourceVersion":"364","creationTimestamp":"2023-11-01T00:02:33Z"}}]}
	I1101 00:09:48.117391   30593 default_sa.go:45] found service account: "default"
	I1101 00:09:48.117408   30593 default_sa.go:55] duration metric: took 194.889894ms for default service account to be created ...
	I1101 00:09:48.117415   30593 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 00:09:48.313818   30593 request.go:629] Waited for 196.325558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I1101 00:09:48.313881   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/namespaces/kube-system/pods
	I1101 00:09:48.313886   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:48.313893   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:48.313899   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:48.317985   30593 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 00:09:48.318004   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:48.318011   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:48.318018   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:48 GMT
	I1101 00:09:48.318027   30593 round_trippers.go:580]     Audit-Id: 7b682312-a373-4aac-a928-19f0e9f08ce4
	I1101 00:09:48.318035   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:48.318042   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:48.318051   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:48.319258   30593 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1244"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dg5w7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"eb94555e-1465-4dec-9d6d-ebcbec02841e","resourceVersion":"1232","creationTimestamp":"2023-11-01T00:02:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b8b40d79-8b5c-4662-975a-410885a71c32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T00:02:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8b40d79-8b5c-4662-975a-410885a71c32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83346 chars]
	I1101 00:09:48.321698   30593 system_pods.go:86] 12 kube-system pods found
	I1101 00:09:48.321724   30593 system_pods.go:89] "coredns-5dd5756b68-dg5w7" [eb94555e-1465-4dec-9d6d-ebcbec02841e] Running
	I1101 00:09:48.321729   30593 system_pods.go:89] "etcd-multinode-391061" [0537cc4c-2127-4424-b02f-9e4747bc8713] Running
	I1101 00:09:48.321733   30593 system_pods.go:89] "kindnet-4jfj9" [2559e20b-85cf-43d5-8663-7ec855d71df9] Running
	I1101 00:09:48.321739   30593 system_pods.go:89] "kindnet-lcljq" [171d5f22-d781-4224-88f7-f940ad9e747b] Running
	I1101 00:09:48.321743   30593 system_pods.go:89] "kindnet-wrdhd" [85db010e-82bd-4efa-a760-0669bf1e52de] Running
	I1101 00:09:48.321747   30593 system_pods.go:89] "kube-apiserver-multinode-391061" [dff82899-3db2-46a2-aea0-ec57d58be1c8] Running
	I1101 00:09:48.321752   30593 system_pods.go:89] "kube-controller-manager-multinode-391061" [4775e566-6acd-43ac-b7cd-8dbd245c33cf] Running
	I1101 00:09:48.321756   30593 system_pods.go:89] "kube-proxy-clsrp" [a747b091-d679-4ae6-a995-c980235c9a61] Running
	I1101 00:09:48.321762   30593 system_pods.go:89] "kube-proxy-rcnv9" [9b65a6f4-4c34-40e5-a5bd-aedfc335cbc9] Running
	I1101 00:09:48.321765   30593 system_pods.go:89] "kube-proxy-vdjh2" [9838a111-09e4-4975-b925-1ae5dcfa7334] Running
	I1101 00:09:48.321772   30593 system_pods.go:89] "kube-scheduler-multinode-391061" [eaf767ff-8f68-4b91-bcd7-b550481a6155] Running
	I1101 00:09:48.321777   30593 system_pods.go:89] "storage-provisioner" [b0b970e9-7d0b-4e94-8ca8-2f3348eaf579] Running
	I1101 00:09:48.321785   30593 system_pods.go:126] duration metric: took 204.365858ms to wait for k8s-apps to be running ...
	I1101 00:09:48.321794   30593 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 00:09:48.321835   30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:09:48.334581   30593 system_svc.go:56] duration metric: took 12.775415ms WaitForService to wait for kubelet.
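
The kubelet check above runs systemctl is-active --quiet in the VM and looks only at the exit status. Reduced to a local sketch; minikube invokes it over SSH, and the extra "service" token is copied verbatim from the logged command:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; a zero exit status means the unit is active.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
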
	I1101 00:09:48.334608   30593 kubeadm.go:581] duration metric: took 11.996332779s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 00:09:48.334634   30593 node_conditions.go:102] verifying NodePressure condition ...
	I1101 00:09:48.514065   30593 request.go:629] Waited for 179.367734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.43:8443/api/v1/nodes
	I1101 00:09:48.514131   30593 round_trippers.go:463] GET https://192.168.39.43:8443/api/v1/nodes
	I1101 00:09:48.514136   30593 round_trippers.go:469] Request Headers:
	I1101 00:09:48.514144   30593 round_trippers.go:473]     Accept: application/json, */*
	I1101 00:09:48.514150   30593 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1101 00:09:48.517017   30593 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 00:09:48.517036   30593 round_trippers.go:577] Response Headers:
	I1101 00:09:48.517043   30593 round_trippers.go:580]     Audit-Id: acbda546-1395-4e94-a808-39a73ef2e8e6
	I1101 00:09:48.517057   30593 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 00:09:48.517063   30593 round_trippers.go:580]     Content-Type: application/json
	I1101 00:09:48.517070   30593 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: efa313dd-6ab9-4168-b89a-a71a43556a6c
	I1101 00:09:48.517077   30593 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24bfcdbf-4ab4-4ab8-b75a-c15db6211e04
	I1101 00:09:48.517087   30593 round_trippers.go:580]     Date: Wed, 01 Nov 2023 00:09:48 GMT
	I1101 00:09:48.517358   30593 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1244"},"items":[{"metadata":{"name":"multinode-391061","uid":"89150ca4-1b08-4c36-b7e2-214a39d89c72","resourceVersion":"1218","creationTimestamp":"2023-11-01T00:02:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-391061","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-391061","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T00_02_22_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v [truncated 9463 chars]
	I1101 00:09:48.517853   30593 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:09:48.517873   30593 node_conditions.go:123] node cpu capacity is 2
	I1101 00:09:48.517883   30593 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:09:48.517888   30593 node_conditions.go:123] node cpu capacity is 2
	I1101 00:09:48.517892   30593 node_conditions.go:105] duration metric: took 183.255117ms to run NodePressure ...
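
The NodePressure step above reads each node's capacity (17784752Ki of ephemeral storage and 2 CPUs per node here) out of a single NodeList response. Roughly, under the same omitted-clientset assumption as the sketches above, with verifyNodePressure an illustrative name:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// verifyNodePressure mirrors the per-node capacity lines above.
func verifyNodePressure(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node storage ephemeral capacity is %s, cpu capacity is %s\n",
			storage.String(), cpu.String())
	}
	return nil
}

func main() {}
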
	I1101 00:09:48.517902   30593 start.go:228] waiting for startup goroutines ...
	I1101 00:09:48.517913   30593 start.go:233] waiting for cluster config update ...
	I1101 00:09:48.517918   30593 start.go:242] writing updated cluster config ...
	I1101 00:09:48.518328   30593 config.go:182] Loaded profile config "multinode-391061": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1101 00:09:48.518400   30593 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/config.json ...
	I1101 00:09:48.521532   30593 out.go:177] * Starting worker node multinode-391061-m02 in cluster multinode-391061
	I1101 00:09:48.522898   30593 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1101 00:09:48.522933   30593 cache.go:56] Caching tarball of preloaded images
	I1101 00:09:48.523028   30593 preload.go:174] Found /home/jenkins/minikube-integration/17486-7251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1101 00:09:48.523039   30593 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1101 00:09:48.523130   30593 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/config.json ...
	I1101 00:09:48.523306   30593 start.go:365] acquiring machines lock for multinode-391061-m02: {Name:mkd250049361a5d831a3d31c273569334737e54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:09:48.523347   30593 start.go:369] acquired machines lock for "multinode-391061-m02" in 23.277µs
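
The Spec printed in the "acquiring machines lock" line above ({Name:… Delay:500ms Timeout:13m0s Cancel:<nil>}) matches the juju/mutex package, which serializes machine operations across minikube processes. A hedged sketch, assuming the github.com/juju/mutex/v2 API; the lock name is illustrative, not minikube's hashed name:

package main

import (
	"fmt"
	"time"

	"github.com/juju/clock"
	"github.com/juju/mutex/v2"
)

func main() {
	spec := mutex.Spec{
		Name:    "machines-demo", // illustrative; minikube derives the real name from a hashed path
		Clock:   clock.WallClock,
		Delay:   500 * time.Millisecond, // Delay and Timeout copied from the log line above
		Timeout: 13 * time.Minute,
	}
	releaser, err := mutex.Acquire(spec)
	if err != nil {
		fmt.Println("could not acquire machines lock:", err)
		return
	}
	defer releaser.Release()
	fmt.Println("acquired machines lock")
}
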
	I1101 00:09:48.523360   30593 start.go:96] Skipping create...Using existing machine configuration
	I1101 00:09:48.523365   30593 fix.go:54] fixHost starting: m02
	I1101 00:09:48.523626   30593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:09:48.523657   30593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:09:48.538023   30593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33341
	I1101 00:09:48.538553   30593 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:09:48.539008   30593 main.go:141] libmachine: Using API Version  1
	I1101 00:09:48.539038   30593 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:09:48.539380   30593 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:09:48.539558   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
	I1101 00:09:48.539763   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetState
	I1101 00:09:48.541362   30593 fix.go:102] recreateIfNeeded on multinode-391061-m02: state=Stopped err=<nil>
	I1101 00:09:48.541381   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
	W1101 00:09:48.541559   30593 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 00:09:48.543776   30593 out.go:177] * Restarting existing kvm2 VM for "multinode-391061-m02" ...
	I1101 00:09:48.545357   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .Start
	I1101 00:09:48.545519   30593 main.go:141] libmachine: (multinode-391061-m02) Ensuring networks are active...
	I1101 00:09:48.546142   30593 main.go:141] libmachine: (multinode-391061-m02) Ensuring network default is active
	I1101 00:09:48.546521   30593 main.go:141] libmachine: (multinode-391061-m02) Ensuring network mk-multinode-391061 is active
	I1101 00:09:48.546910   30593 main.go:141] libmachine: (multinode-391061-m02) Getting domain xml...
	I1101 00:09:48.547503   30593 main.go:141] libmachine: (multinode-391061-m02) Creating domain...
	I1101 00:09:49.771823   30593 main.go:141] libmachine: (multinode-391061-m02) Waiting to get IP...
	I1101 00:09:49.772640   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:09:49.773071   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
	I1101 00:09:49.773175   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:49.773074   30847 retry.go:31] will retry after 274.263244ms: waiting for machine to come up
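
The "will retry after …" lines here and below come from a jittered, growing backoff around the IP lookup, which is why the logged waits climb from ~274ms toward multi-second gaps. A generic sketch of the pattern; the helper name, bounds, and doubling policy are illustrative, not minikube's retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff polls fn until it succeeds, sleeping an exponentially
// growing, jittered delay between attempts, like the waits logged here.
func retryWithBackoff(fn func() error, maxWait time.Duration) error {
	delay := 250 * time.Millisecond
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		if err := fn(); err == nil {
			return nil
		}
		jittered := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay *= 2
	}
	return errors.New("machine did not come up in time")
}

func main() {
	attempts := 0
	_ = retryWithBackoff(func() error {
		attempts++
		if attempts < 5 {
			return errors.New("unable to find current IP address")
		}
		return nil
	}, time.Minute)
}
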
	I1101 00:09:50.048692   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:09:50.049124   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
	I1101 00:09:50.049162   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:50.049076   30847 retry.go:31] will retry after 372.692246ms: waiting for machine to come up
	I1101 00:09:50.423723   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:09:50.424163   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
	I1101 00:09:50.424198   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:50.424109   30847 retry.go:31] will retry after 328.806363ms: waiting for machine to come up
	I1101 00:09:50.754813   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:09:50.755280   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
	I1101 00:09:50.755299   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:50.755254   30847 retry.go:31] will retry after 486.547371ms: waiting for machine to come up
	I1101 00:09:51.243022   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:09:51.243428   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
	I1101 00:09:51.243451   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:51.243379   30847 retry.go:31] will retry after 524.248371ms: waiting for machine to come up
	I1101 00:09:51.769198   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:09:51.769648   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
	I1101 00:09:51.769689   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:51.769606   30847 retry.go:31] will retry after 931.47967ms: waiting for machine to come up
	I1101 00:09:52.703177   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:09:52.703627   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
	I1101 00:09:52.703656   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:52.703550   30847 retry.go:31] will retry after 962.96473ms: waiting for machine to come up
	I1101 00:09:53.668096   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:09:53.668562   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
	I1101 00:09:53.668584   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:53.668516   30847 retry.go:31] will retry after 926.464487ms: waiting for machine to come up
	I1101 00:09:54.596589   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:09:54.596929   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
	I1101 00:09:54.596953   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:54.596883   30847 retry.go:31] will retry after 1.199020855s: waiting for machine to come up
	I1101 00:09:55.797189   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:09:55.797717   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
	I1101 00:09:55.797748   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:55.797665   30847 retry.go:31] will retry after 1.98043569s: waiting for machine to come up
	I1101 00:09:57.780876   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:09:57.781471   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
	I1101 00:09:57.781502   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:09:57.781409   30847 retry.go:31] will retry after 2.601288069s: waiting for machine to come up
	I1101 00:10:00.385745   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:00.386332   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
	I1101 00:10:00.386369   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:10:00.386242   30847 retry.go:31] will retry after 2.239008923s: waiting for machine to come up
	I1101 00:10:02.627577   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:02.627955   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | unable to find current IP address of domain multinode-391061-m02 in network mk-multinode-391061
	I1101 00:10:02.627983   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | I1101 00:10:02.627920   30847 retry.go:31] will retry after 3.415765053s: waiting for machine to come up
	I1101 00:10:06.046739   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.047249   30593 main.go:141] libmachine: (multinode-391061-m02) Found IP for machine: 192.168.39.249
	I1101 00:10:06.047290   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has current primary IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.047305   30593 main.go:141] libmachine: (multinode-391061-m02) Reserving static IP address...
	I1101 00:10:06.047763   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "multinode-391061-m02", mac: "52:54:00:f1:1a:84", ip: "192.168.39.249"} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:06.047790   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | skip adding static IP to network mk-multinode-391061 - found existing host DHCP lease matching {name: "multinode-391061-m02", mac: "52:54:00:f1:1a:84", ip: "192.168.39.249"}
	I1101 00:10:06.047800   30593 main.go:141] libmachine: (multinode-391061-m02) Reserved static IP address: 192.168.39.249
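	[Editor's note] The retry loop above is libmachine's KVM driver polling libvirt for a DHCP lease matching the domain's MAC, with growing backoff between attempts (retry.go:31). A rough manual equivalent — a hypothetical sketch, not part of the test; MAC and network name copied from the log — would be:

	    # Hypothetical sketch: poll the libvirt network for the domain's DHCP lease.
	    MAC=52:54:00:f1:1a:84
	    NET=mk-multinode-391061
	    DELAY=0.3
	    until virsh net-dhcp-leases "$NET" | grep -qi "$MAC"; do
	        sleep "$DELAY"
	        DELAY=$(echo "$DELAY * 1.5" | bc)   # back off, roughly like retry.go
	    done
	    virsh net-dhcp-leases "$NET" | grep -i "$MAC"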
	I1101 00:10:06.047814   30593 main.go:141] libmachine: (multinode-391061-m02) Waiting for SSH to be available...
	I1101 00:10:06.047824   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | Getting to WaitForSSH function...
	I1101 00:10:06.049673   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.050046   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:06.050081   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.050222   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | Using SSH client type: external
	I1101 00:10:06.050261   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa (-rw-------)
	I1101 00:10:06.050300   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 00:10:06.050322   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | About to run SSH command:
	I1101 00:10:06.050339   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | exit 0
	I1101 00:10:06.146337   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | SSH cmd err, output: <nil>: 
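	[Editor's note] The "external SSH client" lines above are the argument vector libmachine hands to /usr/bin/ssh. Reassembled into a single command (flags transcribed directly from the log; only the ordering is tidied), the liveness probe that just succeeded is:

	    /usr/bin/ssh -F /dev/null \
	      -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	      -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	      -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	      -o IdentitiesOnly=yes \
	      -i /home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa \
	      -p 22 docker@192.168.39.249 'exit 0'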
	I1101 00:10:06.146696   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetConfigRaw
	I1101 00:10:06.147450   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetIP
	I1101 00:10:06.149870   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.150236   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:06.150267   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.150541   30593 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/multinode-391061/config.json ...
	I1101 00:10:06.150763   30593 machine.go:88] provisioning docker machine ...
	I1101 00:10:06.150786   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
	I1101 00:10:06.150984   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetMachineName
	I1101 00:10:06.151140   30593 buildroot.go:166] provisioning hostname "multinode-391061-m02"
	I1101 00:10:06.151161   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetMachineName
	I1101 00:10:06.151315   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
	I1101 00:10:06.153372   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.153742   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:06.153790   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.153926   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
	I1101 00:10:06.154158   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:06.154347   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:06.154535   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
	I1101 00:10:06.154739   30593 main.go:141] libmachine: Using SSH client type: native
	I1101 00:10:06.155162   30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1101 00:10:06.155179   30593 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-391061-m02 && echo "multinode-391061-m02" | sudo tee /etc/hostname
	I1101 00:10:06.302682   30593 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-391061-m02
	
	I1101 00:10:06.302715   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
	I1101 00:10:06.305443   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.305857   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:06.305883   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.306094   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
	I1101 00:10:06.306306   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:06.306521   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:06.306659   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
	I1101 00:10:06.306805   30593 main.go:141] libmachine: Using SSH client type: native
	I1101 00:10:06.307269   30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1101 00:10:06.307298   30593 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-391061-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-391061-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-391061-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 00:10:06.448087   30593 main.go:141] libmachine: SSH cmd err, output: <nil>: 
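	[Editor's note] The two SSH commands above set the kernel hostname and pin it to 127.0.1.1 in /etc/hosts (the Debian/buildroot convention for resolving the local hostname without DNS). A hypothetical spot-check — not run by the test — to confirm both took effect:

	    # Hypothetical spot-check; KEY path copied from the log.
	    KEY=/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa
	    ssh -i "$KEY" docker@192.168.39.249 'hostname; grep -E "^127\.0\.1\.1" /etc/hosts'
	    # expected: multinode-391061-m02
	    #           127.0.1.1 multinode-391061-m02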
	I1101 00:10:06.448122   30593 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7251/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7251/.minikube}
	I1101 00:10:06.448143   30593 buildroot.go:174] setting up certificates
	I1101 00:10:06.448153   30593 provision.go:83] configureAuth start
	I1101 00:10:06.448163   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetMachineName
	I1101 00:10:06.448466   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetIP
	I1101 00:10:06.451196   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.451596   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:06.451627   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.451812   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
	I1101 00:10:06.453965   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.454286   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:06.454315   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.454535   30593 provision.go:138] copyHostCerts
	I1101 00:10:06.454570   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem
	I1101 00:10:06.454601   30593 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem, removing ...
	I1101 00:10:06.454610   30593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem
	I1101 00:10:06.454674   30593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem (1082 bytes)
	I1101 00:10:06.454748   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem
	I1101 00:10:06.454767   30593 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem, removing ...
	I1101 00:10:06.454773   30593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem
	I1101 00:10:06.454796   30593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem (1123 bytes)
	I1101 00:10:06.454836   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem
	I1101 00:10:06.454852   30593 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem, removing ...
	I1101 00:10:06.454858   30593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem
	I1101 00:10:06.454876   30593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem (1675 bytes)
	I1101 00:10:06.454920   30593 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem org=jenkins.multinode-391061-m02 san=[192.168.39.249 192.168.39.249 localhost 127.0.0.1 minikube multinode-391061-m02]
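	[Editor's note] provision.go generates that server certificate in Go (crypto/x509), signed by the CA in certs/ and carrying the SAN list shown above. Purely for illustration — a hypothetical openssl equivalent, not what minikube actually runs — the same certificate shape would be:

	    # Hypothetical openssl equivalent (illustration only; SANs/org from the log).
	    openssl req -new -newkey rsa:2048 -nodes \
	      -keyout server-key.pem -out server.csr \
	      -subj "/O=jenkins.multinode-391061-m02"
	    openssl x509 -req -in server.csr \
	      -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	      -extfile <(printf 'subjectAltName=IP:192.168.39.249,DNS:localhost,IP:127.0.0.1,DNS:minikube,DNS:multinode-391061-m02') \
	      -out server.pem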
	I1101 00:10:06.568585   30593 provision.go:172] copyRemoteCerts
	I1101 00:10:06.568638   30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 00:10:06.568659   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
	I1101 00:10:06.571150   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.571450   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:06.571479   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.571687   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
	I1101 00:10:06.571874   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:06.572047   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
	I1101 00:10:06.572186   30593 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa Username:docker}
	I1101 00:10:06.667838   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 00:10:06.667924   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 00:10:06.689930   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 00:10:06.689995   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1101 00:10:06.712213   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 00:10:06.712292   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 00:10:06.733879   30593 provision.go:86] duration metric: configureAuth took 285.714663ms
	I1101 00:10:06.733904   30593 buildroot.go:189] setting minikube options for container-runtime
	I1101 00:10:06.734094   30593 config.go:182] Loaded profile config "multinode-391061": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1101 00:10:06.734113   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
	I1101 00:10:06.734377   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
	I1101 00:10:06.736917   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.737314   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:06.737348   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.737503   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
	I1101 00:10:06.737692   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:06.737870   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:06.738014   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
	I1101 00:10:06.738189   30593 main.go:141] libmachine: Using SSH client type: native
	I1101 00:10:06.738528   30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1101 00:10:06.738541   30593 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1101 00:10:06.871826   30593 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1101 00:10:06.871854   30593 buildroot.go:70] root file system type: tmpfs
	I1101 00:10:06.872006   30593 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1101 00:10:06.872036   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
	I1101 00:10:06.874568   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.874916   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:06.874940   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:06.875118   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
	I1101 00:10:06.875315   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:06.875468   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:06.875569   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
	I1101 00:10:06.875698   30593 main.go:141] libmachine: Using SSH client type: native
	I1101 00:10:06.876002   30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1101 00:10:06.876075   30593 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.43"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1101 00:10:07.020165   30593 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.43
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1101 00:10:07.020194   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
	I1101 00:10:07.022769   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:07.023132   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:07.023159   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:07.023341   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
	I1101 00:10:07.023522   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:07.023707   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:07.023843   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
	I1101 00:10:07.023996   30593 main.go:141] libmachine: Using SSH client type: native
	I1101 00:10:07.024324   30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1101 00:10:07.024341   30593 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1101 00:10:07.865650   30593 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1101 00:10:07.865678   30593 machine.go:91] provisioned docker machine in 1.714900545s
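	[Editor's note] The command two steps up is minikube's idempotent unit install: render the unit to docker.service.new, and only if it differs from the installed file (or, as here, the target does not exist yet, so diff exits non-zero) swap it in and reload/enable/restart. Distilled into a reusable helper — a hypothetical sketch of the same pattern, not minikube code:

	    # Hypothetical sketch of the write-diff-swap pattern used above.
	    install_unit() {   # usage: render_unit | install_unit /lib/systemd/system/docker.service
	        local unit=$1
	        sudo tee "$unit.new" >/dev/null
	        sudo diff -u "$unit" "$unit.new" || {   # also fires when $unit is missing
	            sudo mv "$unit.new" "$unit"
	            sudo systemctl daemon-reload
	            sudo systemctl -f enable "$(basename "$unit")"
	            sudo systemctl -f restart "$(basename "$unit")"
	        }
	    }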
	I1101 00:10:07.865693   30593 start.go:300] post-start starting for "multinode-391061-m02" (driver="kvm2")
	I1101 00:10:07.865707   30593 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 00:10:07.865730   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
	I1101 00:10:07.866051   30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 00:10:07.866082   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
	I1101 00:10:07.868728   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:07.869111   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:07.869135   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:07.869295   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
	I1101 00:10:07.869516   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:07.869672   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
	I1101 00:10:07.869814   30593 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa Username:docker}
	I1101 00:10:07.964822   30593 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 00:10:07.968645   30593 command_runner.go:130] > NAME=Buildroot
	I1101 00:10:07.968665   30593 command_runner.go:130] > VERSION=2021.02.12-1-g0cee705-dirty
	I1101 00:10:07.968672   30593 command_runner.go:130] > ID=buildroot
	I1101 00:10:07.968681   30593 command_runner.go:130] > VERSION_ID=2021.02.12
	I1101 00:10:07.968687   30593 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1101 00:10:07.968778   30593 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 00:10:07.968802   30593 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7251/.minikube/addons for local assets ...
	I1101 00:10:07.968861   30593 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7251/.minikube/files for local assets ...
	I1101 00:10:07.968928   30593 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem -> 144632.pem in /etc/ssl/certs
	I1101 00:10:07.968937   30593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem -> /etc/ssl/certs/144632.pem
	I1101 00:10:07.969013   30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 00:10:07.978134   30593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem --> /etc/ssl/certs/144632.pem (1708 bytes)
	I1101 00:10:07.999912   30593 start.go:303] post-start completed in 134.20357ms
	I1101 00:10:07.999936   30593 fix.go:56] fixHost completed within 19.476570148s
	I1101 00:10:07.999956   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
	I1101 00:10:08.002715   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:08.003077   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:08.003109   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:08.003255   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
	I1101 00:10:08.003478   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:08.003658   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:08.003796   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
	I1101 00:10:08.003977   30593 main.go:141] libmachine: Using SSH client type: native
	I1101 00:10:08.004287   30593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1101 00:10:08.004297   30593 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1101 00:10:08.139625   30593 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698797408.091239350
	
	I1101 00:10:08.139661   30593 fix.go:206] guest clock: 1698797408.091239350
	I1101 00:10:08.139672   30593 fix.go:219] Guest: 2023-11-01 00:10:08.09123935 +0000 UTC Remote: 2023-11-01 00:10:07.999939094 +0000 UTC m=+78.350442936 (delta=91.300256ms)
	I1101 00:10:08.139692   30593 fix.go:190] guest clock delta is within tolerance: 91.300256ms
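	[Editor's note] fix.go's clock check runs date +%s.%N on the guest and compares it with the host's wall clock; the restart proceeds because the 91.3ms delta is inside the drift tolerance. A standalone version of the same check — hypothetical sketch; IP and key path from the log:

	    # Hypothetical sketch of the guest-clock drift check in fix.go.
	    KEY=/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa
	    guest=$(ssh -i "$KEY" docker@192.168.39.249 'date +%s.%N')
	    host=$(date +%s.%N)
	    echo "guest clock delta: $(echo "$host - $guest" | bc)s"
	    # log above: delta=91.300256ms, within tolerance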
	I1101 00:10:08.139699   30593 start.go:83] releasing machines lock for "multinode-391061-m02", held for 19.616342127s
	I1101 00:10:08.139723   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
	I1101 00:10:08.140075   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetIP
	I1101 00:10:08.142846   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:08.143203   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:08.143246   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:08.145734   30593 out.go:177] * Found network options:
	I1101 00:10:08.147426   30593 out.go:177]   - NO_PROXY=192.168.39.43
	W1101 00:10:08.148945   30593 proxy.go:119] fail to check proxy env: Error ip not in block
	I1101 00:10:08.148990   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
	I1101 00:10:08.149744   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
	I1101 00:10:08.149992   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
	I1101 00:10:08.150087   30593 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 00:10:08.150122   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
	W1101 00:10:08.150204   30593 proxy.go:119] fail to check proxy env: Error ip not in block
	I1101 00:10:08.150272   30593 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1101 00:10:08.150293   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
	I1101 00:10:08.153130   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:08.153377   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:08.153609   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:08.153633   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:08.153818   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:10:00 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:10:08.153840   30593 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:10:08.153853   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
	I1101 00:10:08.154005   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
	I1101 00:10:08.154068   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:08.154141   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:10:08.154205   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
	I1101 00:10:08.154260   30593 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
	I1101 00:10:08.154322   30593 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa Username:docker}
	I1101 00:10:08.154355   30593 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa Username:docker}
	I1101 00:10:08.266696   30593 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1101 00:10:08.266764   30593 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1101 00:10:08.266798   30593 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 00:10:08.266854   30593 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 00:10:08.282630   30593 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1101 00:10:08.282695   30593 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 00:10:08.282708   30593 start.go:472] detecting cgroup driver to use...
	I1101 00:10:08.282848   30593 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:10:08.299593   30593 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1101 00:10:08.299879   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1101 00:10:08.309962   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1101 00:10:08.319802   30593 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1101 00:10:08.319855   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1101 00:10:08.329984   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1101 00:10:08.340324   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1101 00:10:08.350388   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1101 00:10:08.360362   30593 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 00:10:08.370630   30593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1101 00:10:08.380841   30593 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 00:10:08.389848   30593 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1101 00:10:08.389933   30593 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 00:10:08.398827   30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:10:08.509909   30593 ssh_runner.go:195] Run: sudo systemctl restart containerd
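	[Editor's note] The sed edits above should leave /etc/containerd/config.toml using the cgroupfs driver, the pause:3.9 sandbox image, and /etc/cni/net.d for CNI configs. A hypothetical spot-check of the values those edits imply (not run by the test):

	    # Hypothetical spot-check; expected values follow from the sed commands above.
	    KEY=/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa
	    ssh -i "$KEY" docker@192.168.39.249 \
	      'grep -E "SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir" /etc/containerd/config.toml'
	    # expected:
	    #   SystemdCgroup = false
	    #   sandbox_image = "registry.k8s.io/pause:3.9"
	    #   restrict_oom_score_adj = false
	    #   conf_dir = "/etc/cni/net.d"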
	I1101 00:10:08.527202   30593 start.go:472] detecting cgroup driver to use...
	I1101 00:10:08.527267   30593 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1101 00:10:08.539911   30593 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1101 00:10:08.540831   30593 command_runner.go:130] > [Unit]
	I1101 00:10:08.540847   30593 command_runner.go:130] > Description=Docker Application Container Engine
	I1101 00:10:08.540853   30593 command_runner.go:130] > Documentation=https://docs.docker.com
	I1101 00:10:08.540859   30593 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1101 00:10:08.540864   30593 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1101 00:10:08.540873   30593 command_runner.go:130] > StartLimitBurst=3
	I1101 00:10:08.540880   30593 command_runner.go:130] > StartLimitIntervalSec=60
	I1101 00:10:08.540884   30593 command_runner.go:130] > [Service]
	I1101 00:10:08.540890   30593 command_runner.go:130] > Type=notify
	I1101 00:10:08.540899   30593 command_runner.go:130] > Restart=on-failure
	I1101 00:10:08.540906   30593 command_runner.go:130] > Environment=NO_PROXY=192.168.39.43
	I1101 00:10:08.540915   30593 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1101 00:10:08.540932   30593 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1101 00:10:08.540943   30593 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1101 00:10:08.540952   30593 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1101 00:10:08.540961   30593 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1101 00:10:08.540970   30593 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1101 00:10:08.540980   30593 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1101 00:10:08.540993   30593 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1101 00:10:08.541002   30593 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1101 00:10:08.541009   30593 command_runner.go:130] > ExecStart=
	I1101 00:10:08.541024   30593 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1101 00:10:08.541035   30593 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1101 00:10:08.541042   30593 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1101 00:10:08.541051   30593 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1101 00:10:08.541057   30593 command_runner.go:130] > LimitNOFILE=infinity
	I1101 00:10:08.541062   30593 command_runner.go:130] > LimitNPROC=infinity
	I1101 00:10:08.541066   30593 command_runner.go:130] > LimitCORE=infinity
	I1101 00:10:08.541073   30593 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1101 00:10:08.541080   30593 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1101 00:10:08.541087   30593 command_runner.go:130] > TasksMax=infinity
	I1101 00:10:08.541091   30593 command_runner.go:130] > TimeoutStartSec=0
	I1101 00:10:08.541100   30593 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1101 00:10:08.541106   30593 command_runner.go:130] > Delegate=yes
	I1101 00:10:08.541112   30593 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1101 00:10:08.541122   30593 command_runner.go:130] > KillMode=process
	I1101 00:10:08.541128   30593 command_runner.go:130] > [Install]
	I1101 00:10:08.541133   30593 command_runner.go:130] > WantedBy=multi-user.target
	I1101 00:10:08.541558   30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:10:08.556173   30593 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 00:10:08.575016   30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:10:08.587990   30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 00:10:08.601691   30593 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1101 00:10:08.631342   30593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 00:10:08.644194   30593 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:10:08.661548   30593 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1101 00:10:08.662099   30593 ssh_runner.go:195] Run: which cri-dockerd
	I1101 00:10:08.665592   30593 command_runner.go:130] > /usr/bin/cri-dockerd
	I1101 00:10:08.665782   30593 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1101 00:10:08.674228   30593 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1101 00:10:08.690202   30593 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1101 00:10:08.793665   30593 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1101 00:10:08.913029   30593 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
	I1101 00:10:08.913074   30593 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1101 00:10:08.928591   30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:10:09.029624   30593 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1101 00:10:10.439233   30593 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.409560046s)
	I1101 00:10:10.439309   30593 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1101 00:10:10.540266   30593 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1101 00:10:10.657292   30593 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1101 00:10:10.768655   30593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:10:10.871570   30593 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1101 00:10:10.887421   30593 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
	I1101 00:10:10.889772   30593 out.go:177] 
	W1101 00:10:10.891480   30593 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W1101 00:10:10.891500   30593 out.go:239] * 
	W1101 00:10:10.892409   30593 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 00:10:10.894220   30593 out.go:177] 
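	[Editor's note] The run dies at RUNTIME_ENABLE: `sudo systemctl restart cri-docker.socket` exits 1 on the m02 node, and the stderr only carries systemd's generic "Job failed" hint. A typical triage from the failing guest — hypothetical; none of these commands were run by the test — would be:

	    # Hypothetical triage on the guest.
	    sudo systemctl status cri-docker.socket cri-docker.service
	    sudo journalctl -xeu cri-docker.socket
	    sudo journalctl -xeu cri-docker.service
	    # A socket unit fails to (re)start if its listen path is already bound
	    # or its matching service unit is masked/invalid -- check both.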
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-11-01 00:09:00 UTC, ends at Wed 2023-11-01 00:10:11 UTC. --
	Nov 01 00:09:34 multinode-391061 dockerd[846]: time="2023-11-01T00:09:34.354911439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 01 00:09:34 multinode-391061 dockerd[846]: time="2023-11-01T00:09:34.354921195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 01 00:09:36 multinode-391061 cri-dockerd[1070]: time="2023-11-01T00:09:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fde88b0de04da9bbd831a6d4c66ca23079816d358c2a073c1c844f3c823b3a46/resolv.conf as [nameserver 192.168.122.1]"
	Nov 01 00:09:36 multinode-391061 dockerd[846]: time="2023-11-01T00:09:36.610425616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 01 00:09:36 multinode-391061 dockerd[846]: time="2023-11-01T00:09:36.610678286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 01 00:09:36 multinode-391061 dockerd[846]: time="2023-11-01T00:09:36.610882319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 01 00:09:36 multinode-391061 dockerd[846]: time="2023-11-01T00:09:36.611022314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 01 00:09:40 multinode-391061 dockerd[846]: time="2023-11-01T00:09:40.374690136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 01 00:09:40 multinode-391061 dockerd[846]: time="2023-11-01T00:09:40.375186054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 01 00:09:40 multinode-391061 dockerd[846]: time="2023-11-01T00:09:40.375212070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 01 00:09:40 multinode-391061 dockerd[846]: time="2023-11-01T00:09:40.375225621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 01 00:09:40 multinode-391061 dockerd[846]: time="2023-11-01T00:09:40.385513235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 01 00:09:40 multinode-391061 dockerd[846]: time="2023-11-01T00:09:40.385648741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 01 00:09:40 multinode-391061 dockerd[846]: time="2023-11-01T00:09:40.385726478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 01 00:09:40 multinode-391061 dockerd[846]: time="2023-11-01T00:09:40.385835459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 01 00:09:40 multinode-391061 cri-dockerd[1070]: time="2023-11-01T00:09:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1607f59d6ba061ddfaed58cd098e43eb0a9636f0a88d126db9b8190b719c5a2c/resolv.conf as [nameserver 192.168.122.1]"
	Nov 01 00:09:40 multinode-391061 dockerd[846]: time="2023-11-01T00:09:40.935555299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 01 00:09:40 multinode-391061 dockerd[846]: time="2023-11-01T00:09:40.938870881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 01 00:09:40 multinode-391061 dockerd[846]: time="2023-11-01T00:09:40.939075579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 01 00:09:40 multinode-391061 dockerd[846]: time="2023-11-01T00:09:40.939369832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 01 00:09:41 multinode-391061 cri-dockerd[1070]: time="2023-11-01T00:09:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/dd4ac2bcf1f1a7e97a662352c7ff24fed55ebabd9072e6380c598ee47a8bd587/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Nov 01 00:09:41 multinode-391061 dockerd[846]: time="2023-11-01T00:09:41.205597128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 01 00:09:41 multinode-391061 dockerd[846]: time="2023-11-01T00:09:41.205714151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 01 00:09:41 multinode-391061 dockerd[846]: time="2023-11-01T00:09:41.205824051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 01 00:09:41 multinode-391061 dockerd[846]: time="2023-11-01T00:09:41.205840774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7977c47b23fe0       8c811b4aec35f       30 seconds ago      Running             busybox                   2                   dd4ac2bcf1f1a       busybox-5bc68d56bd-gm6t7
	c9a40438d8228       ead0a4a53df89       31 seconds ago      Running             coredns                   2                   1607f59d6ba06       coredns-5dd5756b68-dg5w7
	5c271018fdbe1       c7d1297425461       35 seconds ago      Running             kindnet-cni               2                   fde88b0de04da       kindnet-4jfj9
	bf95dea74238d       6e38f40d628db       37 seconds ago      Running             storage-provisioner       3                   40ae286f2e451       storage-provisioner
	a5893c8acc578       bfc896cf80fba       38 seconds ago      Running             kube-proxy                2                   d00d0faf2517f       kube-proxy-clsrp
	57698df880604       6d1b4fd1b182d       43 seconds ago      Running             kube-scheduler            2                   08911deed6912       kube-scheduler-multinode-391061
	16f5037339398       73deb9a3f7025       44 seconds ago      Running             etcd                      2                   df5b53c7fbd9f       etcd-multinode-391061
	c2c9b3f6a6e3c       10baa1ca17068       44 seconds ago      Running             kube-controller-manager   2                   686def3a5433e       kube-controller-manager-multinode-391061
	ad9ce8cffbbd9       5374347291230       44 seconds ago      Running             kube-apiserver            2                   058229c68e582       kube-apiserver-multinode-391061
	c8ec107c7b838       6e38f40d628db       3 minutes ago       Exited              storage-provisioner       2                   6e72da581d8b3       storage-provisioner
	8c3065faff023       8c811b4aec35f       3 minutes ago       Exited              busybox                   1                   02ff0963ebcb2       busybox-5bc68d56bd-gm6t7
	8a050fec9e562       ead0a4a53df89       3 minutes ago       Exited              coredns                   1                   0922f8b627ba5       coredns-5dd5756b68-dg5w7
	7e5dd13abba8f       c7d1297425461       3 minutes ago       Exited              kindnet-cni               1                   d52c65ebca758       kindnet-4jfj9
	beeaf0ac020b3       bfc896cf80fba       3 minutes ago       Exited              kube-proxy                1                   5c355a51915ed       kube-proxy-clsrp
	37d9dd0022b92       73deb9a3f7025       3 minutes ago       Exited              etcd                      1                   92b70c8321ee1       etcd-multinode-391061
	c5ea3d84d06ff       6d1b4fd1b182d       3 minutes ago       Exited              kube-scheduler            1                   9f5176fde232a       kube-scheduler-multinode-391061
	32294fac02b31       10baa1ca17068       3 minutes ago       Exited              kube-controller-manager   1                   f576715f1f474       kube-controller-manager-multinode-391061
	a49a86a47d7cc       5374347291230       3 minutes ago       Exited              kube-apiserver            1                   36d5f0bd5cf2b       kube-apiserver-multinode-391061
	
	* 
	* ==> coredns [8a050fec9e56] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60917 - 58012 "HINFO IN 5379909798549472737.3172976332792896323. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021213453s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [c9a40438d822] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53059 - 12343 "HINFO IN 8994418390587536084.7952953180045631116. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.045076997s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-391061
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-391061
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9
	                    minikube.k8s.io/name=multinode-391061
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_01T00_02_22_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Nov 2023 00:02:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-391061
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Nov 2023 00:10:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Nov 2023 00:09:38 +0000   Wed, 01 Nov 2023 00:02:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Nov 2023 00:09:38 +0000   Wed, 01 Nov 2023 00:02:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Nov 2023 00:09:38 +0000   Wed, 01 Nov 2023 00:02:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Nov 2023 00:09:38 +0000   Wed, 01 Nov 2023 00:09:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.43
	  Hostname:    multinode-391061
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 47962989365f465fa8a710ebe1080a98
	  System UUID:                47962989-365f-465f-a8a7-10ebe1080a98
	  Boot ID:                    343d2a39-eea8-4e0b-8c4a-ac4d1581ade2
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-gm6t7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 coredns-5dd5756b68-dg5w7                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m39s
	  kube-system                 etcd-multinode-391061                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m51s
	  kube-system                 kindnet-4jfj9                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m39s
	  kube-system                 kube-apiserver-multinode-391061             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 kube-controller-manager-multinode-391061    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 kube-proxy-clsrp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m39s
	  kube-system                 kube-scheduler-multinode-391061             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m37s                  kube-proxy       
	  Normal  Starting                 37s                    kube-proxy       
	  Normal  Starting                 3m43s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  7m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    7m59s (x8 over 7m59s)  kubelet          Node multinode-391061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m59s (x7 over 7m59s)  kubelet          Node multinode-391061 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m59s (x8 over 7m59s)  kubelet          Node multinode-391061 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m51s                  kubelet          Node multinode-391061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m51s                  kubelet          Node multinode-391061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m51s                  kubelet          Node multinode-391061 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m40s                  node-controller  Node multinode-391061 event: Registered Node multinode-391061 in Controller
	  Normal  NodeReady                7m28s                  kubelet          Node multinode-391061 status is now: NodeReady
	  Normal  Starting                 3m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m51s (x8 over 3m52s)  kubelet          Node multinode-391061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s (x8 over 3m52s)  kubelet          Node multinode-391061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s (x7 over 3m52s)  kubelet          Node multinode-391061 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m33s                  node-controller  Node multinode-391061 event: Registered Node multinode-391061 in Controller
	  Normal  Starting                 47s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  47s (x8 over 47s)      kubelet          Node multinode-391061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x8 over 47s)      kubelet          Node multinode-391061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x7 over 47s)      kubelet          Node multinode-391061 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  47s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           29s                    node-controller  Node multinode-391061 event: Registered Node multinode-391061 in Controller
	
	
	Name:               multinode-391061-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-391061-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Nov 2023 00:07:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-391061-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Nov 2023 00:08:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Nov 2023 00:07:25 +0000   Wed, 01 Nov 2023 00:07:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Nov 2023 00:07:25 +0000   Wed, 01 Nov 2023 00:07:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Nov 2023 00:07:25 +0000   Wed, 01 Nov 2023 00:07:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Nov 2023 00:07:25 +0000   Wed, 01 Nov 2023 00:07:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.249
	  Hostname:    multinode-391061-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 0d0e385c1d0d48059fa1f8426a07e391
	  System UUID:                0d0e385c-1d0d-4805-9fa1-f8426a07e391
	  Boot ID:                    cadfab0f-d241-492f-aeaa-46e564f9963c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-lgqxz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kindnet-lcljq               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m50s
	  kube-system                 kube-proxy-rcnv9            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m42s                  kube-proxy       
	  Normal  Starting                 2m55s                  kube-proxy       
	  Normal  Starting                 6m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m50s (x2 over 6m50s)  kubelet          Node multinode-391061-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m50s (x2 over 6m50s)  kubelet          Node multinode-391061-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m50s (x2 over 6m50s)  kubelet          Node multinode-391061-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m34s                  kubelet          Node multinode-391061-m02 status is now: NodeReady
	  Normal  Starting                 2m58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m58s (x2 over 2m58s)  kubelet          Node multinode-391061-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m58s (x2 over 2m58s)  kubelet          Node multinode-391061-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m58s (x2 over 2m58s)  kubelet          Node multinode-391061-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m53s                  node-controller  Node multinode-391061-m02 event: Registered Node multinode-391061-m02 in Controller
	  Normal  NodeReady                2m47s                  kubelet          Node multinode-391061-m02 status is now: NodeReady
	  Normal  RegisteredNode           29s                    node-controller  Node multinode-391061-m02 event: Registered Node multinode-391061-m02 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov 1 00:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.064584] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.313265] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Nov 1 00:09] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.132278] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.339845] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.187073] systemd-fstab-generator[513]: Ignoring "noauto" for root device
	[  +0.098054] systemd-fstab-generator[524]: Ignoring "noauto" for root device
	[  +1.203746] systemd-fstab-generator[768]: Ignoring "noauto" for root device
	[  +0.291358] systemd-fstab-generator[807]: Ignoring "noauto" for root device
	[  +0.112174] systemd-fstab-generator[818]: Ignoring "noauto" for root device
	[  +0.133633] systemd-fstab-generator[831]: Ignoring "noauto" for root device
	[  +1.571299] systemd-fstab-generator[1015]: Ignoring "noauto" for root device
	[  +0.107000] systemd-fstab-generator[1026]: Ignoring "noauto" for root device
	[  +0.104810] systemd-fstab-generator[1037]: Ignoring "noauto" for root device
	[  +0.118817] systemd-fstab-generator[1048]: Ignoring "noauto" for root device
	[  +0.123026] systemd-fstab-generator[1062]: Ignoring "noauto" for root device
	[ +12.006823] systemd-fstab-generator[1313]: Ignoring "noauto" for root device
	[  +0.390251] kauditd_printk_skb: 67 callbacks suppressed
	
	* 
	* ==> etcd [16f503733939] <==
	* {"level":"info","ts":"2023-11-01T00:09:28.19015Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-01T00:09:28.190278Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-01T00:09:28.191119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 switched to configuration voters=(4987603935014751745)"}
	{"level":"info","ts":"2023-11-01T00:09:28.191618Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e2f92b1da63e7b06","local-member-id":"4537875a7ae50e01","added-peer-id":"4537875a7ae50e01","added-peer-peer-urls":["https://192.168.39.43:2380"]}
	{"level":"info","ts":"2023-11-01T00:09:28.194717Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-01T00:09:28.198264Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"4537875a7ae50e01","initial-advertise-peer-urls":["https://192.168.39.43:2380"],"listen-peer-urls":["https://192.168.39.43:2380"],"advertise-client-urls":["https://192.168.39.43:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.43:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-01T00:09:28.198324Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-01T00:09:28.192852Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e2f92b1da63e7b06","local-member-id":"4537875a7ae50e01","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T00:09:28.198398Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T00:09:28.195222Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.43:2380"}
	{"level":"info","ts":"2023-11-01T00:09:28.203901Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.43:2380"}
	{"level":"info","ts":"2023-11-01T00:09:29.925884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 is starting a new election at term 3"}
	{"level":"info","ts":"2023-11-01T00:09:29.925965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-11-01T00:09:29.925987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 received MsgPreVoteResp from 4537875a7ae50e01 at term 3"}
	{"level":"info","ts":"2023-11-01T00:09:29.926006Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 became candidate at term 4"}
	{"level":"info","ts":"2023-11-01T00:09:29.926013Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 received MsgVoteResp from 4537875a7ae50e01 at term 4"}
	{"level":"info","ts":"2023-11-01T00:09:29.926027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 became leader at term 4"}
	{"level":"info","ts":"2023-11-01T00:09:29.926034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4537875a7ae50e01 elected leader 4537875a7ae50e01 at term 4"}
	{"level":"info","ts":"2023-11-01T00:09:29.929012Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"4537875a7ae50e01","local-member-attributes":"{Name:multinode-391061 ClientURLs:[https://192.168.39.43:2379]}","request-path":"/0/members/4537875a7ae50e01/attributes","cluster-id":"e2f92b1da63e7b06","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-01T00:09:29.929031Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T00:09:29.929748Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T00:09:29.930821Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.43:2379"}
	{"level":"info","ts":"2023-11-01T00:09:29.930917Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-01T00:09:29.931219Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-01T00:09:29.931355Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [37d9dd0022b9] <==
	* {"level":"info","ts":"2023-11-01T00:06:23.592713Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-01T00:06:25.029032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 is starting a new election at term 2"}
	{"level":"info","ts":"2023-11-01T00:06:25.029091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-11-01T00:06:25.029125Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 received MsgPreVoteResp from 4537875a7ae50e01 at term 2"}
	{"level":"info","ts":"2023-11-01T00:06:25.029138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 became candidate at term 3"}
	{"level":"info","ts":"2023-11-01T00:06:25.029144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 received MsgVoteResp from 4537875a7ae50e01 at term 3"}
	{"level":"info","ts":"2023-11-01T00:06:25.029151Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 became leader at term 3"}
	{"level":"info","ts":"2023-11-01T00:06:25.029158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4537875a7ae50e01 elected leader 4537875a7ae50e01 at term 3"}
	{"level":"info","ts":"2023-11-01T00:06:25.032053Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"4537875a7ae50e01","local-member-attributes":"{Name:multinode-391061 ClientURLs:[https://192.168.39.43:2379]}","request-path":"/0/members/4537875a7ae50e01/attributes","cluster-id":"e2f92b1da63e7b06","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-01T00:06:25.032229Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T00:06:25.032298Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-01T00:06:25.032532Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-01T00:06:25.03234Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T00:06:25.033416Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.43:2379"}
	{"level":"info","ts":"2023-11-01T00:06:25.035467Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-01T00:08:24.506535Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-11-01T00:08:24.506671Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-391061","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.43:2380"],"advertise-client-urls":["https://192.168.39.43:2379"]}
	{"level":"warn","ts":"2023-11-01T00:08:24.506833Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-01T00:08:24.506976Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-01T00:08:24.561334Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.43:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-01T00:08:24.561383Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.43:2379: use of closed network connection"}
	{"level":"info","ts":"2023-11-01T00:08:24.561438Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"4537875a7ae50e01","current-leader-member-id":"4537875a7ae50e01"}
	{"level":"info","ts":"2023-11-01T00:08:24.566054Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.43:2380"}
	{"level":"info","ts":"2023-11-01T00:08:24.566194Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.43:2380"}
	{"level":"info","ts":"2023-11-01T00:08:24.566206Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-391061","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.43:2380"],"advertise-client-urls":["https://192.168.39.43:2379"]}
	
	* 
	* ==> kernel <==
	*  00:10:12 up 1 min,  0 users,  load average: 0.53, 0.19, 0.06
	Linux multinode-391061 5.10.57 #1 SMP Tue Oct 31 22:14:31 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [5c271018fdbe] <==
	* I1101 00:09:37.076606       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1101 00:09:37.076860       1 main.go:107] hostIP = 192.168.39.43
	podIP = 192.168.39.43
	I1101 00:09:37.077418       1 main.go:116] setting mtu 1500 for CNI 
	I1101 00:09:37.077435       1 main.go:146] kindnetd IP family: "ipv4"
	I1101 00:09:37.077457       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1101 00:09:37.765325       1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
	I1101 00:09:37.765411       1 main.go:227] handling current node
	I1101 00:09:37.765817       1 main.go:223] Handling node with IPs: map[192.168.39.249:{}]
	I1101 00:09:37.765920       1 main.go:250] Node multinode-391061-m02 has CIDR [10.244.1.0/24] 
	I1101 00:09:37.766175       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.249 Flags: [] Table: 0} 
	I1101 00:09:47.778674       1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
	I1101 00:09:47.778897       1 main.go:227] handling current node
	I1101 00:09:47.779259       1 main.go:223] Handling node with IPs: map[192.168.39.249:{}]
	I1101 00:09:47.779370       1 main.go:250] Node multinode-391061-m02 has CIDR [10.244.1.0/24] 
	I1101 00:09:57.791597       1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
	I1101 00:09:57.791660       1 main.go:227] handling current node
	I1101 00:09:57.791697       1 main.go:223] Handling node with IPs: map[192.168.39.249:{}]
	I1101 00:09:57.791707       1 main.go:250] Node multinode-391061-m02 has CIDR [10.244.1.0/24] 
	I1101 00:10:07.806269       1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
	I1101 00:10:07.806343       1 main.go:227] handling current node
	I1101 00:10:07.806360       1 main.go:223] Handling node with IPs: map[192.168.39.249:{}]
	I1101 00:10:07.806370       1 main.go:250] Node multinode-391061-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kindnet [7e5dd13abba8] <==
	* I1101 00:07:52.418568       1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
	I1101 00:07:52.418863       1 main.go:227] handling current node
	I1101 00:07:52.419038       1 main.go:223] Handling node with IPs: map[192.168.39.249:{}]
	I1101 00:07:52.419172       1 main.go:250] Node multinode-391061-m02 has CIDR [10.244.1.0/24] 
	I1101 00:07:52.419455       1 main.go:223] Handling node with IPs: map[192.168.39.62:{}]
	I1101 00:07:52.419617       1 main.go:250] Node multinode-391061-m03 has CIDR [10.244.3.0/24] 
	I1101 00:08:02.433400       1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
	I1101 00:08:02.433450       1 main.go:227] handling current node
	I1101 00:08:02.433469       1 main.go:223] Handling node with IPs: map[192.168.39.249:{}]
	I1101 00:08:02.433475       1 main.go:250] Node multinode-391061-m02 has CIDR [10.244.1.0/24] 
	I1101 00:08:02.433691       1 main.go:223] Handling node with IPs: map[192.168.39.62:{}]
	I1101 00:08:02.433721       1 main.go:250] Node multinode-391061-m03 has CIDR [10.244.2.0/24] 
	I1101 00:08:02.433954       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.62 Flags: [] Table: 0} 
	I1101 00:08:12.447883       1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
	I1101 00:08:12.447998       1 main.go:227] handling current node
	I1101 00:08:12.448013       1 main.go:223] Handling node with IPs: map[192.168.39.249:{}]
	I1101 00:08:12.448020       1 main.go:250] Node multinode-391061-m02 has CIDR [10.244.1.0/24] 
	I1101 00:08:12.449781       1 main.go:223] Handling node with IPs: map[192.168.39.62:{}]
	I1101 00:08:12.449820       1 main.go:250] Node multinode-391061-m03 has CIDR [10.244.2.0/24] 
	I1101 00:08:22.463898       1 main.go:223] Handling node with IPs: map[192.168.39.43:{}]
	I1101 00:08:22.464012       1 main.go:227] handling current node
	I1101 00:08:22.464023       1 main.go:223] Handling node with IPs: map[192.168.39.249:{}]
	I1101 00:08:22.464028       1 main.go:250] Node multinode-391061-m02 has CIDR [10.244.1.0/24] 
	I1101 00:08:22.464151       1 main.go:223] Handling node with IPs: map[192.168.39.62:{}]
	I1101 00:08:22.464157       1 main.go:250] Node multinode-391061-m03 has CIDR [10.244.2.0/24] 
	
	* 
	* ==> kube-apiserver [a49a86a47d7c] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 00:08:34.314179       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 00:08:34.348000       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 00:08:34.432385       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [ad9ce8cffbbd] <==
	* I1101 00:09:31.332906       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1101 00:09:31.333121       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1101 00:09:31.331296       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1101 00:09:31.473440       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 00:09:31.478009       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1101 00:09:31.524950       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1101 00:09:31.528324       1 shared_informer.go:318] Caches are synced for configmaps
	I1101 00:09:31.528549       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1101 00:09:31.528556       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1101 00:09:31.530253       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 00:09:31.536719       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1101 00:09:31.537039       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 00:09:31.537669       1 aggregator.go:166] initial CRD sync complete...
	I1101 00:09:31.538387       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 00:09:31.538397       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 00:09:31.538404       1 cache.go:39] Caches are synced for autoregister controller
	I1101 00:09:32.337365       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1101 00:09:32.766705       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.43]
	I1101 00:09:32.772589       1 controller.go:624] quota admission added evaluator for: endpoints
	I1101 00:09:32.786650       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 00:09:34.247198       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1101 00:09:34.484576       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 00:09:34.500470       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 00:09:34.591116       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 00:09:34.605275       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [32294fac02b3] <==
	* I1101 00:07:30.756798       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="109.824µs"
	I1101 00:07:31.546793       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="99.538µs"
	I1101 00:07:31.550888       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="59.487µs"
	I1101 00:07:51.172866       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-lgqxz"
	I1101 00:07:51.184193       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="23.907187ms"
	I1101 00:07:51.184323       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="56.747µs"
	I1101 00:07:51.197194       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="11.955962ms"
	I1101 00:07:51.197472       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="228.664µs"
	I1101 00:07:51.206401       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="73.711µs"
	I1101 00:07:53.042996       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.681385ms"
	I1101 00:07:53.043314       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="263.588µs"
	I1101 00:07:54.181191       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-391061-m02"
	I1101 00:07:54.292099       1 event.go:307] "Event occurred" object="multinode-391061-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-391061-m03 event: Removing Node multinode-391061-m03 from Controller"
	I1101 00:07:55.043180       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-391061-m02"
	I1101 00:07:55.043308       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-391061-m03\" does not exist"
	I1101 00:07:55.044961       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-8p7xh" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-8p7xh"
	I1101 00:07:55.067698       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-391061-m03" podCIDRs=["10.244.2.0/24"]
	I1101 00:07:55.878604       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="51.605µs"
	I1101 00:07:56.054311       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="75.305µs"
	I1101 00:07:56.060178       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="52.941µs"
	I1101 00:07:56.064177       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="37.937µs"
	I1101 00:07:59.293276       1 event.go:307] "Event occurred" object="multinode-391061-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-391061-m03 event: Registered Node multinode-391061-m03 in Controller"
	I1101 00:08:20.322442       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-391061-m02"
	I1101 00:08:22.787615       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-391061-m02"
	I1101 00:08:24.299109       1 event.go:307] "Event occurred" object="multinode-391061-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-391061-m03 event: Removing Node multinode-391061-m03 from Controller"
	
	* 
	* ==> kube-controller-manager [c2c9b3f6a6e3] <==
	* I1101 00:09:43.752727       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1101 00:09:43.754170       1 shared_informer.go:318] Caches are synced for crt configmap
	I1101 00:09:43.756628       1 shared_informer.go:318] Caches are synced for ephemeral
	I1101 00:09:43.758945       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1101 00:09:43.761079       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1101 00:09:43.766052       1 shared_informer.go:318] Caches are synced for GC
	I1101 00:09:43.768414       1 shared_informer.go:318] Caches are synced for node
	I1101 00:09:43.768654       1 range_allocator.go:174] "Sending events to api server"
	I1101 00:09:43.769022       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1101 00:09:43.769049       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1101 00:09:43.769056       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1101 00:09:43.774287       1 shared_informer.go:318] Caches are synced for disruption
	I1101 00:09:43.776800       1 shared_informer.go:318] Caches are synced for cronjob
	I1101 00:09:43.785117       1 shared_informer.go:318] Caches are synced for PV protection
	I1101 00:09:43.805894       1 shared_informer.go:318] Caches are synced for deployment
	I1101 00:09:43.809436       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1101 00:09:43.809815       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="119.601µs"
	I1101 00:09:43.809826       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="117.574µs"
	I1101 00:09:43.819800       1 shared_informer.go:318] Caches are synced for persistent volume
	I1101 00:09:43.863962       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 00:09:43.896418       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 00:09:43.899176       1 shared_informer.go:318] Caches are synced for attach detach
	I1101 00:09:44.306189       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 00:09:44.344919       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 00:09:44.344971       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [a5893c8acc57] <==
	* I1101 00:09:33.971163       1 server_others.go:69] "Using iptables proxy"
	I1101 00:09:34.046638       1 node.go:141] Successfully retrieved node IP: 192.168.39.43
	I1101 00:09:34.159869       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1101 00:09:34.159893       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 00:09:34.183447       1 server_others.go:152] "Using iptables Proxier"
	I1101 00:09:34.183946       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 00:09:34.186201       1 server.go:846] "Version info" version="v1.28.3"
	I1101 00:09:34.186217       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 00:09:34.188049       1 config.go:188] "Starting service config controller"
	I1101 00:09:34.188282       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 00:09:34.188372       1 config.go:97] "Starting endpoint slice config controller"
	I1101 00:09:34.188378       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 00:09:34.191644       1 config.go:315] "Starting node config controller"
	I1101 00:09:34.191652       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 00:09:34.288910       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 00:09:34.288969       1 shared_informer.go:318] Caches are synced for service config
	I1101 00:09:34.337116       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [beeaf0ac020b] <==
	* I1101 00:06:28.096795       1 server_others.go:69] "Using iptables proxy"
	I1101 00:06:28.127555       1 node.go:141] Successfully retrieved node IP: 192.168.39.43
	I1101 00:06:28.365777       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1101 00:06:28.365834       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 00:06:28.368654       1 server_others.go:152] "Using iptables Proxier"
	I1101 00:06:28.369134       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 00:06:28.369499       1 server.go:846] "Version info" version="v1.28.3"
	I1101 00:06:28.369511       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 00:06:28.375081       1 config.go:188] "Starting service config controller"
	I1101 00:06:28.375497       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 00:06:28.375528       1 config.go:97] "Starting endpoint slice config controller"
	I1101 00:06:28.375533       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 00:06:28.377256       1 config.go:315] "Starting node config controller"
	I1101 00:06:28.377295       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 00:06:28.641604       1 shared_informer.go:318] Caches are synced for node config
	I1101 00:06:28.642049       1 shared_informer.go:318] Caches are synced for service config
	I1101 00:06:28.642162       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [57698df88060] <==
	* I1101 00:09:29.511193       1 serving.go:348] Generated self-signed cert in-memory
	W1101 00:09:31.436874       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 00:09:31.436979       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 00:09:31.437012       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 00:09:31.437076       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 00:09:31.479769       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1101 00:09:31.480029       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 00:09:31.481854       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 00:09:31.482174       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 00:09:31.483071       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1101 00:09:31.483309       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1101 00:09:31.583076       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [c5ea3d84d06f] <==
	* I1101 00:06:23.847682       1 serving.go:348] Generated self-signed cert in-memory
	W1101 00:06:26.463897       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 00:06:26.464011       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 00:06:26.464023       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 00:06:26.464029       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 00:06:26.499405       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1101 00:06:26.499451       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 00:06:26.501549       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1101 00:06:26.502431       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 00:06:26.502487       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 00:06:26.502608       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1101 00:06:26.602635       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 00:08:24.434312       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1101 00:08:24.434482       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1101 00:08:24.435090       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1101 00:08:24.435338       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-11-01 00:09:00 UTC, ends at Wed 2023-11-01 00:10:12 UTC. --
	Nov 01 00:09:32 multinode-391061 kubelet[1319]: E1101 00:09:32.399191    1319 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Nov 01 00:09:32 multinode-391061 kubelet[1319]: E1101 00:09:32.399277    1319 projected.go:198] Error preparing data for projected volume kube-api-access-r4kj9 for pod default/busybox-5bc68d56bd-gm6t7: object "default"/"kube-root-ca.crt" not registered
	Nov 01 00:09:32 multinode-391061 kubelet[1319]: E1101 00:09:32.399346    1319 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c54225b-c1bf-4e3d-9de3-dfc1676104bf-kube-api-access-r4kj9 podName:2c54225b-c1bf-4e3d-9de3-dfc1676104bf nodeName:}" failed. No retries permitted until 2023-11-01 00:09:32.899330697 +0000 UTC m=+7.850513782 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-r4kj9" (UniqueName: "kubernetes.io/projected/2c54225b-c1bf-4e3d-9de3-dfc1676104bf-kube-api-access-r4kj9") pod "busybox-5bc68d56bd-gm6t7" (UID: "2c54225b-c1bf-4e3d-9de3-dfc1676104bf") : object "default"/"kube-root-ca.crt" not registered
	Nov 01 00:09:32 multinode-391061 kubelet[1319]: E1101 00:09:32.970970    1319 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 01 00:09:32 multinode-391061 kubelet[1319]: E1101 00:09:32.971033    1319 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb94555e-1465-4dec-9d6d-ebcbec02841e-config-volume podName:eb94555e-1465-4dec-9d6d-ebcbec02841e nodeName:}" failed. No retries permitted until 2023-11-01 00:09:33.971019978 +0000 UTC m=+8.922203058 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/eb94555e-1465-4dec-9d6d-ebcbec02841e-config-volume") pod "coredns-5dd5756b68-dg5w7" (UID: "eb94555e-1465-4dec-9d6d-ebcbec02841e") : object "kube-system"/"coredns" not registered
	Nov 01 00:09:32 multinode-391061 kubelet[1319]: E1101 00:09:32.971438    1319 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Nov 01 00:09:32 multinode-391061 kubelet[1319]: E1101 00:09:32.971452    1319 projected.go:198] Error preparing data for projected volume kube-api-access-r4kj9 for pod default/busybox-5bc68d56bd-gm6t7: object "default"/"kube-root-ca.crt" not registered
	Nov 01 00:09:32 multinode-391061 kubelet[1319]: E1101 00:09:32.972099    1319 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c54225b-c1bf-4e3d-9de3-dfc1676104bf-kube-api-access-r4kj9 podName:2c54225b-c1bf-4e3d-9de3-dfc1676104bf nodeName:}" failed. No retries permitted until 2023-11-01 00:09:33.97204017 +0000 UTC m=+8.923223252 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-r4kj9" (UniqueName: "kubernetes.io/projected/2c54225b-c1bf-4e3d-9de3-dfc1676104bf-kube-api-access-r4kj9") pod "busybox-5bc68d56bd-gm6t7" (UID: "2c54225b-c1bf-4e3d-9de3-dfc1676104bf") : object "default"/"kube-root-ca.crt" not registered
	Nov 01 00:09:33 multinode-391061 kubelet[1319]: E1101 00:09:33.983261    1319 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Nov 01 00:09:33 multinode-391061 kubelet[1319]: E1101 00:09:33.983357    1319 projected.go:198] Error preparing data for projected volume kube-api-access-r4kj9 for pod default/busybox-5bc68d56bd-gm6t7: object "default"/"kube-root-ca.crt" not registered
	Nov 01 00:09:33 multinode-391061 kubelet[1319]: E1101 00:09:33.983414    1319 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c54225b-c1bf-4e3d-9de3-dfc1676104bf-kube-api-access-r4kj9 podName:2c54225b-c1bf-4e3d-9de3-dfc1676104bf nodeName:}" failed. No retries permitted until 2023-11-01 00:09:35.983397975 +0000 UTC m=+10.934581055 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-r4kj9" (UniqueName: "kubernetes.io/projected/2c54225b-c1bf-4e3d-9de3-dfc1676104bf-kube-api-access-r4kj9") pod "busybox-5bc68d56bd-gm6t7" (UID: "2c54225b-c1bf-4e3d-9de3-dfc1676104bf") : object "default"/"kube-root-ca.crt" not registered
	Nov 01 00:09:33 multinode-391061 kubelet[1319]: E1101 00:09:33.983865    1319 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 01 00:09:33 multinode-391061 kubelet[1319]: E1101 00:09:33.983912    1319 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb94555e-1465-4dec-9d6d-ebcbec02841e-config-volume podName:eb94555e-1465-4dec-9d6d-ebcbec02841e nodeName:}" failed. No retries permitted until 2023-11-01 00:09:35.983901106 +0000 UTC m=+10.935084185 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/eb94555e-1465-4dec-9d6d-ebcbec02841e-config-volume") pod "coredns-5dd5756b68-dg5w7" (UID: "eb94555e-1465-4dec-9d6d-ebcbec02841e") : object "kube-system"/"coredns" not registered
	Nov 01 00:09:34 multinode-391061 kubelet[1319]: I1101 00:09:34.131973    1319 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40ae286f2e451298335e60ff530480a9945a5b00cbbb6a4b638e780b78fbf458"
	Nov 01 00:09:36 multinode-391061 kubelet[1319]: E1101 00:09:36.025454    1319 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 01 00:09:36 multinode-391061 kubelet[1319]: E1101 00:09:36.025667    1319 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb94555e-1465-4dec-9d6d-ebcbec02841e-config-volume podName:eb94555e-1465-4dec-9d6d-ebcbec02841e nodeName:}" failed. No retries permitted until 2023-11-01 00:09:40.02564935 +0000 UTC m=+14.976832432 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/eb94555e-1465-4dec-9d6d-ebcbec02841e-config-volume") pod "coredns-5dd5756b68-dg5w7" (UID: "eb94555e-1465-4dec-9d6d-ebcbec02841e") : object "kube-system"/"coredns" not registered
	Nov 01 00:09:36 multinode-391061 kubelet[1319]: E1101 00:09:36.026184    1319 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Nov 01 00:09:36 multinode-391061 kubelet[1319]: E1101 00:09:36.026205    1319 projected.go:198] Error preparing data for projected volume kube-api-access-r4kj9 for pod default/busybox-5bc68d56bd-gm6t7: object "default"/"kube-root-ca.crt" not registered
	Nov 01 00:09:36 multinode-391061 kubelet[1319]: E1101 00:09:36.026250    1319 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c54225b-c1bf-4e3d-9de3-dfc1676104bf-kube-api-access-r4kj9 podName:2c54225b-c1bf-4e3d-9de3-dfc1676104bf nodeName:}" failed. No retries permitted until 2023-11-01 00:09:40.026238503 +0000 UTC m=+14.977421570 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-r4kj9" (UniqueName: "kubernetes.io/projected/2c54225b-c1bf-4e3d-9de3-dfc1676104bf-kube-api-access-r4kj9") pod "busybox-5bc68d56bd-gm6t7" (UID: "2c54225b-c1bf-4e3d-9de3-dfc1676104bf") : object "default"/"kube-root-ca.crt" not registered
	Nov 01 00:09:36 multinode-391061 kubelet[1319]: E1101 00:09:36.504880    1319 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-dg5w7" podUID="eb94555e-1465-4dec-9d6d-ebcbec02841e"
	Nov 01 00:09:36 multinode-391061 kubelet[1319]: I1101 00:09:36.504956    1319 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fde88b0de04da9bbd831a6d4c66ca23079816d358c2a073c1c844f3c823b3a46"
	Nov 01 00:09:36 multinode-391061 kubelet[1319]: E1101 00:09:36.507590    1319 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-gm6t7" podUID="2c54225b-c1bf-4e3d-9de3-dfc1676104bf"
	Nov 01 00:09:38 multinode-391061 kubelet[1319]: I1101 00:09:38.193647    1319 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 01 00:09:40 multinode-391061 kubelet[1319]: I1101 00:09:40.831302    1319 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1607f59d6ba061ddfaed58cd098e43eb0a9636f0a88d126db9b8190b719c5a2c"
	Nov 01 00:09:41 multinode-391061 kubelet[1319]: I1101 00:09:41.101886    1319 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd4ac2bcf1f1a7e97a662352c7ff24fed55ebabd9072e6380c598ee47a8bd587"
	

-- /stdout --
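
Note on the requestheader_controller warning in the scheduler log above: kube-scheduler could not read the kube-system/extension-apiserver-authentication ConfigMap, logged that it would continue without authentication configuration, and then served normally, so the warning is almost certainly incidental to this failure (the closing "finished without leader elect" is just how the scheduler exits when its leader-election context is cancelled by the VM restart). The warning carries its own remediation hint; a minimal sketch of that rolebinding, with an illustrative binding name and the system:kube-scheduler user taken from the denial above:

	kubectl create rolebinding extension-apiserver-authentication-reader \
	  --namespace=kube-system \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler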
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-391061 -n multinode-391061
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-391061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartMultiNode (83.62s)
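
One legible pattern in the kubelet errors above: each failed volume mount is retried with a doubling delay (durationBeforeRetry of 500ms, then 1s, 2s, 4s) while the node waits for the kube-root-ca.crt and coredns ConfigMaps to be re-registered after the restart, and the retries stop once the node reports ready at 00:09:38 — ordinary exponential backoff rather than a distinct fault. A toy reproduction of that schedule (illustrative only, not kubelet source):

	d=500; for i in 1 2 3 4; do echo "retry in ${d}ms"; d=$((d * 2)); done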

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (3.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-993392 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p old-k8s-version-993392 "sudo crictl images -o json": exit status 1 (269.924198ms)

-- stdout --
	FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-linux-amd64 ssh -p old-k8s-version-993392 \"sudo crictl images -o json\"": exit status 1
start_stop_delete_test.go:304: failed to decode images json invalid character '\x1b' looking for beginning of value. output:
FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
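
The FATA line above explains both assertion failures: the crictl in this image validates the CRI v1 ImageService, which dockershim on Kubernetes v1.16 does not serve (it predates CRI v1), so no image list is ever returned, and the '\x1b' that breaks the JSON decode is just the ANSI color escape from the FATA prefix. Two hedged workarounds, assuming a crictl release old enough to still speak the pre-v1 CRI and a Docker runtime on the node (neither verified against this run):

	# point crictl at the dockershim socket explicitly (needs a crictl that still speaks v1alpha2)
	sudo crictl --image-endpoint unix:///var/run/dockershim.sock images -o json
	# or bypass CRI and ask the Docker engine directly
	sudo docker images --format '{{.Repository}}:{{.Tag}}'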
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-993392 -n old-k8s-version-993392
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-993392 logs -n 25
E1101 00:36:41.721606   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/false-925990/client.crt: no such file or directory
E1101 00:36:41.849046   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/flannel-925990/client.crt: no such file or directory
E1101 00:36:41.854427   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/flannel-925990/client.crt: no such file or directory
E1101 00:36:41.864724   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/flannel-925990/client.crt: no such file or directory
E1101 00:36:41.885646   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/flannel-925990/client.crt: no such file or directory
E1101 00:36:41.926376   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/flannel-925990/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-993392 logs -n 25: (1.769278423s)
E1101 00:36:42.006974   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/flannel-925990/client.crt: no such file or directory
E1101 00:36:42.169866   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/flannel-925990/client.crt: no such file or directory
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p kubenet-925990 sudo                                 | kubenet-925990               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:33 UTC | 01 Nov 23 00:33 UTC |
	|         | containerd config dump                                 |                              |         |                |                     |                     |
	| ssh     | -p kubenet-925990 sudo                                 | kubenet-925990               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:33 UTC |                     |
	|         | systemctl status crio --all                            |                              |         |                |                     |                     |
	|         | --full --no-pager                                      |                              |         |                |                     |                     |
	| ssh     | -p kubenet-925990 sudo                                 | kubenet-925990               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:33 UTC | 01 Nov 23 00:33 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |                |                     |                     |
	| ssh     | -p kubenet-925990 sudo find                            | kubenet-925990               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:33 UTC | 01 Nov 23 00:33 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |                |                     |                     |
	| ssh     | -p kubenet-925990 sudo crio                            | kubenet-925990               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:33 UTC | 01 Nov 23 00:33 UTC |
	|         | config                                                 |                              |         |                |                     |                     |
	| delete  | -p kubenet-925990                                      | kubenet-925990               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:33 UTC | 01 Nov 23 00:33 UTC |
	| delete  | -p                                                     | disable-driver-mounts-256146 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:33 UTC | 01 Nov 23 00:33 UTC |
	|         | disable-driver-mounts-256146                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-195256 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:33 UTC | 01 Nov 23 00:34 UTC |
	|         | default-k8s-diff-port-195256                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-993392        | old-k8s-version-993392       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:34 UTC | 01 Nov 23 00:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-993392                              | old-k8s-version-993392       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:34 UTC | 01 Nov 23 00:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-658664             | no-preload-658664            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:34 UTC | 01 Nov 23 00:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-658664                                   | no-preload-658664            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:34 UTC | 01 Nov 23 00:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-195256  | default-k8s-diff-port-195256 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:34 UTC | 01 Nov 23 00:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-195256 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:34 UTC | 01 Nov 23 00:35 UTC |
	|         | default-k8s-diff-port-195256                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-503881            | embed-certs-503881           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:34 UTC | 01 Nov 23 00:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-993392             | old-k8s-version-993392       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:34 UTC | 01 Nov 23 00:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-993392                              | old-k8s-version-993392       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:34 UTC | 01 Nov 23 00:36 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |                |                     |                     |
	| stop    | -p embed-certs-503881                                  | embed-certs-503881           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:34 UTC | 01 Nov 23 00:35 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-658664                  | no-preload-658664            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:34 UTC | 01 Nov 23 00:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-658664                                   | no-preload-658664            | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:34 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-195256       | default-k8s-diff-port-195256 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:35 UTC | 01 Nov 23 00:35 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-195256 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:35 UTC |                     |
	|         | default-k8s-diff-port-195256                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-503881                 | embed-certs-503881           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:35 UTC | 01 Nov 23 00:35 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-503881                                  | embed-certs-503881           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:35 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| ssh     | -p old-k8s-version-993392 sudo                         | old-k8s-version-993392       | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:36 UTC |                     |
	|         | crictl images -o json                                  |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/01 00:35:09
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 00:35:09.235419   60145 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:35:09.235517   60145 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:35:09.235529   60145 out.go:309] Setting ErrFile to fd 2...
	I1101 00:35:09.235534   60145 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:35:09.235708   60145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7251/.minikube/bin
	I1101 00:35:09.236253   60145 out.go:303] Setting JSON to false
	I1101 00:35:09.237135   60145 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4659,"bootTime":1698794251,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 00:35:09.237194   60145 start.go:138] virtualization: kvm guest
	I1101 00:35:09.239736   60145 out.go:177] * [embed-certs-503881] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1101 00:35:09.241322   60145 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 00:35:09.241331   60145 notify.go:220] Checking for updates...
	I1101 00:35:09.243102   60145 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 00:35:09.244926   60145 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-7251/kubeconfig
	I1101 00:35:09.246399   60145 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7251/.minikube
	I1101 00:35:09.247891   60145 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 00:35:09.249338   60145 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 00:35:09.251357   60145 config.go:182] Loaded profile config "embed-certs-503881": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1101 00:35:09.251969   60145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:35:09.252076   60145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:35:09.266665   60145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38283
	I1101 00:35:09.267024   60145 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:35:09.267577   60145 main.go:141] libmachine: Using API Version  1
	I1101 00:35:09.267597   60145 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:35:09.267982   60145 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:35:09.268154   60145 main.go:141] libmachine: (embed-certs-503881) Calling .DriverName
	I1101 00:35:09.268392   60145 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 00:35:09.268684   60145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:35:09.268723   60145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:35:09.283085   60145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40547
	I1101 00:35:09.283487   60145 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:35:09.283851   60145 main.go:141] libmachine: Using API Version  1
	I1101 00:35:09.283873   60145 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:35:09.284236   60145 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:35:09.284395   60145 main.go:141] libmachine: (embed-certs-503881) Calling .DriverName
	I1101 00:35:09.321385   60145 out.go:177] * Using the kvm2 driver based on existing profile
	I1101 00:35:09.322997   60145 start.go:298] selected driver: kvm2
	I1101 00:35:09.323021   60145 start.go:902] validating driver "kvm2" against &{Name:embed-certs-503881 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-503881 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.122 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:35:09.323162   60145 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 00:35:09.323900   60145 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:35:09.324009   60145 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17486-7251/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1101 00:35:09.338839   60145 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1101 00:35:09.339233   60145 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 00:35:09.339308   60145 cni.go:84] Creating CNI manager for ""
	I1101 00:35:09.339330   60145 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1101 00:35:09.339350   60145 start_flags.go:323] config:
	{Name:embed-certs-503881 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-503881 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.122 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:35:09.339541   60145 iso.go:125] acquiring lock: {Name:mk56e0e42e3cb427bae1fd4521b75db693021ac1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:35:09.341784   60145 out.go:177] * Starting control plane node embed-certs-503881 in cluster embed-certs-503881
	I1101 00:35:04.709327   60028 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1101 00:35:04.709375   60028 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17486-7251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1101 00:35:04.709388   60028 cache.go:56] Caching tarball of preloaded images
	I1101 00:35:04.709514   60028 preload.go:174] Found /home/jenkins/minikube-integration/17486-7251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1101 00:35:04.709534   60028 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1101 00:35:04.709677   60028 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/default-k8s-diff-port-195256/config.json ...
	I1101 00:35:04.709933   60028 start.go:365] acquiring machines lock for default-k8s-diff-port-195256: {Name:mkd250049361a5d831a3d31c273569334737e54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:35:05.306156   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:05.306878   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | unable to find current IP address of domain old-k8s-version-993392 in network mk-old-k8s-version-993392
	I1101 00:35:05.306912   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | I1101 00:35:05.306791   59763 retry.go:31] will retry after 1.967827811s: waiting for machine to come up
	I1101 00:35:07.276050   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:07.276621   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | unable to find current IP address of domain old-k8s-version-993392 in network mk-old-k8s-version-993392
	I1101 00:35:07.276653   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | I1101 00:35:07.276575   59763 retry.go:31] will retry after 3.167342299s: waiting for machine to come up
	I1101 00:35:09.343448   60145 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1101 00:35:09.343499   60145 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17486-7251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1101 00:35:09.343513   60145 cache.go:56] Caching tarball of preloaded images
	I1101 00:35:09.343611   60145 preload.go:174] Found /home/jenkins/minikube-integration/17486-7251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1101 00:35:09.343626   60145 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1101 00:35:09.343741   60145 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/embed-certs-503881/config.json ...
	I1101 00:35:09.343954   60145 start.go:365] acquiring machines lock for embed-certs-503881: {Name:mkd250049361a5d831a3d31c273569334737e54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 00:35:10.445538   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:10.446040   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | unable to find current IP address of domain old-k8s-version-993392 in network mk-old-k8s-version-993392
	I1101 00:35:10.446068   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | I1101 00:35:10.445984   59763 retry.go:31] will retry after 2.89065487s: waiting for machine to come up
	I1101 00:35:13.339747   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:13.340268   59728 main.go:141] libmachine: (old-k8s-version-993392) Found IP for machine: 192.168.39.70
	I1101 00:35:13.340286   59728 main.go:141] libmachine: (old-k8s-version-993392) Reserving static IP address...
	I1101 00:35:13.340297   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has current primary IP address 192.168.39.70 and MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:13.340744   59728 main.go:141] libmachine: (old-k8s-version-993392) Reserved static IP address: 192.168.39.70
	I1101 00:35:13.340772   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | found host DHCP lease matching {name: "old-k8s-version-993392", mac: "52:54:00:f4:ea:1c", ip: "192.168.39.70"} in network mk-old-k8s-version-993392: {Iface:virbr1 ExpiryTime:2023-11-01 01:32:19 +0000 UTC Type:0 Mac:52:54:00:f4:ea:1c Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-993392 Clientid:01:52:54:00:f4:ea:1c}
	I1101 00:35:13.340785   59728 main.go:141] libmachine: (old-k8s-version-993392) Waiting for SSH to be available...
	I1101 00:35:13.340801   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | skip adding static IP to network mk-old-k8s-version-993392 - found existing host DHCP lease matching {name: "old-k8s-version-993392", mac: "52:54:00:f4:ea:1c", ip: "192.168.39.70"}
	I1101 00:35:13.340809   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | Getting to WaitForSSH function...
	I1101 00:35:13.342952   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:13.343282   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:ea:1c", ip: ""} in network mk-old-k8s-version-993392: {Iface:virbr1 ExpiryTime:2023-11-01 01:32:19 +0000 UTC Type:0 Mac:52:54:00:f4:ea:1c Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-993392 Clientid:01:52:54:00:f4:ea:1c}
	I1101 00:35:13.343310   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined IP address 192.168.39.70 and MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:13.343463   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | Using SSH client type: external
	I1101 00:35:13.343487   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/old-k8s-version-993392/id_rsa (-rw-------)
	I1101 00:35:13.343508   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7251/.minikube/machines/old-k8s-version-993392/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 00:35:13.343518   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | About to run SSH command:
	I1101 00:35:13.343531   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | exit 0
	I1101 00:35:13.426656   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | SSH cmd err, output: <nil>: 
	I1101 00:35:13.427089   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetConfigRaw
	I1101 00:35:13.427811   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetIP
	I1101 00:35:13.430208   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:13.430577   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:ea:1c", ip: ""} in network mk-old-k8s-version-993392: {Iface:virbr1 ExpiryTime:2023-11-01 01:32:19 +0000 UTC Type:0 Mac:52:54:00:f4:ea:1c Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-993392 Clientid:01:52:54:00:f4:ea:1c}
	I1101 00:35:13.430612   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined IP address 192.168.39.70 and MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:13.430837   59728 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/old-k8s-version-993392/config.json ...
	I1101 00:35:13.431059   59728 machine.go:88] provisioning docker machine ...
	I1101 00:35:13.431076   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .DriverName
	I1101 00:35:13.431302   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetMachineName
	I1101 00:35:13.431465   59728 buildroot.go:166] provisioning hostname "old-k8s-version-993392"
	I1101 00:35:13.431482   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetMachineName
	I1101 00:35:13.431634   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHHostname
	I1101 00:35:13.433584   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:13.433860   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:ea:1c", ip: ""} in network mk-old-k8s-version-993392: {Iface:virbr1 ExpiryTime:2023-11-01 01:32:19 +0000 UTC Type:0 Mac:52:54:00:f4:ea:1c Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-993392 Clientid:01:52:54:00:f4:ea:1c}
	I1101 00:35:13.433895   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined IP address 192.168.39.70 and MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:13.433992   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHPort
	I1101 00:35:13.434164   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHKeyPath
	I1101 00:35:13.434324   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHKeyPath
	I1101 00:35:13.434584   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHUsername
	I1101 00:35:13.434748   59728 main.go:141] libmachine: Using SSH client type: native
	I1101 00:35:13.435107   59728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1101 00:35:13.435124   59728 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-993392 && echo "old-k8s-version-993392" | sudo tee /etc/hostname
	I1101 00:35:13.561881   59728 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-993392
	
	I1101 00:35:13.561919   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHHostname
	I1101 00:35:13.564622   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:13.565012   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:ea:1c", ip: ""} in network mk-old-k8s-version-993392: {Iface:virbr1 ExpiryTime:2023-11-01 01:32:19 +0000 UTC Type:0 Mac:52:54:00:f4:ea:1c Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-993392 Clientid:01:52:54:00:f4:ea:1c}
	I1101 00:35:13.565059   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined IP address 192.168.39.70 and MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:13.565166   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHPort
	I1101 00:35:13.565370   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHKeyPath
	I1101 00:35:13.565553   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHKeyPath
	I1101 00:35:13.565705   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHUsername
	I1101 00:35:13.565865   59728 main.go:141] libmachine: Using SSH client type: native
	I1101 00:35:13.566229   59728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1101 00:35:13.566249   59728 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-993392' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-993392/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-993392' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 00:35:13.684395   59728 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 00:35:13.684425   59728 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7251/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7251/.minikube}
	I1101 00:35:13.684446   59728 buildroot.go:174] setting up certificates
	I1101 00:35:13.684455   59728 provision.go:83] configureAuth start
	I1101 00:35:13.684463   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetMachineName
	I1101 00:35:13.684783   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetIP
	I1101 00:35:13.687530   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:13.687839   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:ea:1c", ip: ""} in network mk-old-k8s-version-993392: {Iface:virbr1 ExpiryTime:2023-11-01 01:32:19 +0000 UTC Type:0 Mac:52:54:00:f4:ea:1c Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-993392 Clientid:01:52:54:00:f4:ea:1c}
	I1101 00:35:13.687873   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined IP address 192.168.39.70 and MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:13.688016   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHHostname
	I1101 00:35:13.690545   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:13.690829   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:ea:1c", ip: ""} in network mk-old-k8s-version-993392: {Iface:virbr1 ExpiryTime:2023-11-01 01:32:19 +0000 UTC Type:0 Mac:52:54:00:f4:ea:1c Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-993392 Clientid:01:52:54:00:f4:ea:1c}
	I1101 00:35:13.690861   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined IP address 192.168.39.70 and MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:13.691005   59728 provision.go:138] copyHostCerts
	I1101 00:35:13.691080   59728 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem, removing ...
	I1101 00:35:13.691093   59728 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem
	I1101 00:35:13.691159   59728 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem (1082 bytes)
	I1101 00:35:13.691254   59728 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem, removing ...
	I1101 00:35:13.691261   59728 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem
	I1101 00:35:13.691285   59728 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem (1123 bytes)
	I1101 00:35:13.691346   59728 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem, removing ...
	I1101 00:35:13.691353   59728 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem
	I1101 00:35:13.691374   59728 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem (1675 bytes)
	I1101 00:35:13.691444   59728 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-993392 san=[192.168.39.70 192.168.39.70 localhost 127.0.0.1 minikube old-k8s-version-993392]
	I1101 00:35:13.779968   59728 provision.go:172] copyRemoteCerts
	I1101 00:35:13.780036   59728 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 00:35:13.780061   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHHostname
	I1101 00:35:13.782836   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:13.783107   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:ea:1c", ip: ""} in network mk-old-k8s-version-993392: {Iface:virbr1 ExpiryTime:2023-11-01 01:32:19 +0000 UTC Type:0 Mac:52:54:00:f4:ea:1c Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-993392 Clientid:01:52:54:00:f4:ea:1c}
	I1101 00:35:13.783143   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined IP address 192.168.39.70 and MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:13.783348   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHPort
	I1101 00:35:13.783570   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHKeyPath
	I1101 00:35:13.783747   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHUsername
	I1101 00:35:13.783961   59728 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/old-k8s-version-993392/id_rsa Username:docker}
	I1101 00:35:13.867403   59728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 00:35:13.890015   59728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1101 00:35:13.912052   59728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 00:35:13.934078   59728 provision.go:86] duration metric: configureAuth took 249.611347ms
	I1101 00:35:13.934107   59728 buildroot.go:189] setting minikube options for container-runtime
	I1101 00:35:13.934348   59728 config.go:182] Loaded profile config "old-k8s-version-993392": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1101 00:35:13.934376   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .DriverName
	I1101 00:35:13.934664   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHHostname
	I1101 00:35:13.937468   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:13.937927   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:ea:1c", ip: ""} in network mk-old-k8s-version-993392: {Iface:virbr1 ExpiryTime:2023-11-01 01:32:19 +0000 UTC Type:0 Mac:52:54:00:f4:ea:1c Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-993392 Clientid:01:52:54:00:f4:ea:1c}
	I1101 00:35:13.937964   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined IP address 192.168.39.70 and MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:13.938134   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHPort
	I1101 00:35:13.938348   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHKeyPath
	I1101 00:35:13.938475   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHKeyPath
	I1101 00:35:13.938617   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHUsername
	I1101 00:35:13.938761   59728 main.go:141] libmachine: Using SSH client type: native
	I1101 00:35:13.939137   59728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1101 00:35:13.939153   59728 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1101 00:35:14.047976   59728 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1101 00:35:14.048001   59728 buildroot.go:70] root file system type: tmpfs
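buildroot.go records the root filesystem type probed above; tmpfs is expected on the buildroot ISO, whose rootfs lives in RAM, which is presumably why the docker unit has to be regenerated below after the restart. A minimal sketch of the same probe, shelling out the way the provisioner does over SSH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // The same probe the provisioner runs remotely: the filesystem type of "/".
        out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("root fs type:", strings.TrimSpace(string(out)))
    }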
	I1101 00:35:14.048120   59728 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1101 00:35:14.048139   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHHostname
	I1101 00:35:14.051168   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:14.051607   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:ea:1c", ip: ""} in network mk-old-k8s-version-993392: {Iface:virbr1 ExpiryTime:2023-11-01 01:32:19 +0000 UTC Type:0 Mac:52:54:00:f4:ea:1c Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-993392 Clientid:01:52:54:00:f4:ea:1c}
	I1101 00:35:14.051641   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined IP address 192.168.39.70 and MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:14.051813   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHPort
	I1101 00:35:14.051979   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHKeyPath
	I1101 00:35:14.052117   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHKeyPath
	I1101 00:35:14.052251   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHUsername
	I1101 00:35:14.052380   59728 main.go:141] libmachine: Using SSH client type: native
	I1101 00:35:14.052773   59728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1101 00:35:14.052861   59728 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1101 00:35:14.172281   59728 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1101 00:35:14.172334   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHHostname
	I1101 00:35:14.175028   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:14.175381   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:ea:1c", ip: ""} in network mk-old-k8s-version-993392: {Iface:virbr1 ExpiryTime:2023-11-01 01:32:19 +0000 UTC Type:0 Mac:52:54:00:f4:ea:1c Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-993392 Clientid:01:52:54:00:f4:ea:1c}
	I1101 00:35:14.175420   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined IP address 192.168.39.70 and MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:14.175577   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHPort
	I1101 00:35:14.175837   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHKeyPath
	I1101 00:35:14.176040   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHKeyPath
	I1101 00:35:14.176166   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHUsername
	I1101 00:35:14.176324   59728 main.go:141] libmachine: Using SSH client type: native
	I1101 00:35:14.176646   59728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1101 00:35:14.176665   59728 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1101 00:35:15.037059   59728 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1101 00:35:15.037093   59728 machine.go:91] provisioned docker machine in 1.606017983s
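The diff || { mv; daemon-reload; enable; restart; } command a few lines up is an update-if-changed idiom: the freshly rendered unit only replaces the live one (and only triggers a service restart) when the two differ; here diff fails because no unit existed yet, so the new file is installed and the service enabled for the first time. The same idea as a standalone Go sketch (paths and unit name hypothetical):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // installIfChanged writes rendered to dst only when the contents differ,
    // then reloads systemd and (re)starts the unit. Real units need root.
    func installIfChanged(dst string, rendered []byte, unit string) error {
        current, err := os.ReadFile(dst)
        if err == nil && bytes.Equal(current, rendered) {
            return nil // unchanged: skip the disruptive service restart
        }
        if err := os.WriteFile(dst, rendered, 0o644); err != nil {
            return err
        }
        for _, args := range [][]string{{"daemon-reload"}, {"enable", unit}, {"restart", unit}} {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        unit := []byte("[Unit]\nDescription=demo\n")
        // A /tmp path keeps the sketch harmless; the provisioner targets /lib/systemd/system.
        if err := installIfChanged("/tmp/demo.service", unit, "demo.service"); err != nil {
            fmt.Println("install:", err)
        }
    }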
	I1101 00:35:15.037106   59728 start.go:300] post-start starting for "old-k8s-version-993392" (driver="kvm2")
	I1101 00:35:15.037119   59728 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 00:35:15.037142   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .DriverName
	I1101 00:35:15.037515   59728 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 00:35:15.037550   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHHostname
	I1101 00:35:15.040369   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:15.040729   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:ea:1c", ip: ""} in network mk-old-k8s-version-993392: {Iface:virbr1 ExpiryTime:2023-11-01 01:32:19 +0000 UTC Type:0 Mac:52:54:00:f4:ea:1c Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-993392 Clientid:01:52:54:00:f4:ea:1c}
	I1101 00:35:15.040772   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined IP address 192.168.39.70 and MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:15.040925   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHPort
	I1101 00:35:15.041140   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHKeyPath
	I1101 00:35:15.041331   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHUsername
	I1101 00:35:15.041475   59728 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/old-k8s-version-993392/id_rsa Username:docker}
	I1101 00:35:15.291364   59907 start.go:369] acquired machines lock for "no-preload-658664" in 16.695305945s
	I1101 00:35:15.291409   59907 start.go:96] Skipping create...Using existing machine configuration
	I1101 00:35:15.291419   59907 fix.go:54] fixHost starting: 
	I1101 00:35:15.291825   59907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:35:15.291872   59907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:35:15.311039   59907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41775
	I1101 00:35:15.311489   59907 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:35:15.311994   59907 main.go:141] libmachine: Using API Version  1
	I1101 00:35:15.312049   59907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:35:15.312426   59907 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:35:15.312590   59907 main.go:141] libmachine: (no-preload-658664) Calling .DriverName
	I1101 00:35:15.312753   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetState
	I1101 00:35:15.314258   59907 fix.go:102] recreateIfNeeded on no-preload-658664: state=Stopped err=<nil>
	I1101 00:35:15.314292   59907 main.go:141] libmachine: (no-preload-658664) Calling .DriverName
	W1101 00:35:15.314452   59907 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 00:35:15.316474   59907 out.go:177] * Restarting existing kvm2 VM for "no-preload-658664" ...
	I1101 00:35:15.135817   59728 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 00:35:15.142357   59728 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 00:35:15.142391   59728 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7251/.minikube/addons for local assets ...
	I1101 00:35:15.142472   59728 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7251/.minikube/files for local assets ...
	I1101 00:35:15.142612   59728 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem -> 144632.pem in /etc/ssl/certs
	I1101 00:35:15.142743   59728 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 00:35:15.154866   59728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem --> /etc/ssl/certs/144632.pem (1708 bytes)
	I1101 00:35:15.177188   59728 start.go:303] post-start completed in 140.065964ms
	I1101 00:35:15.177218   59728 fix.go:56] fixHost completed within 19.962657025s
	I1101 00:35:15.177242   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHHostname
	I1101 00:35:15.180121   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:15.180499   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:ea:1c", ip: ""} in network mk-old-k8s-version-993392: {Iface:virbr1 ExpiryTime:2023-11-01 01:32:19 +0000 UTC Type:0 Mac:52:54:00:f4:ea:1c Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-993392 Clientid:01:52:54:00:f4:ea:1c}
	I1101 00:35:15.180537   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined IP address 192.168.39.70 and MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:15.180684   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHPort
	I1101 00:35:15.180898   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHKeyPath
	I1101 00:35:15.181109   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHKeyPath
	I1101 00:35:15.181242   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHUsername
	I1101 00:35:15.181419   59728 main.go:141] libmachine: Using SSH client type: native
	I1101 00:35:15.181733   59728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1101 00:35:15.181746   59728 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1101 00:35:15.291181   59728 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698798915.244267313
	
	I1101 00:35:15.291208   59728 fix.go:206] guest clock: 1698798915.244267313
	I1101 00:35:15.291219   59728 fix.go:219] Guest: 2023-11-01 00:35:15.244267313 +0000 UTC Remote: 2023-11-01 00:35:15.177223557 +0000 UTC m=+20.135765065 (delta=67.043756ms)
	I1101 00:35:15.291268   59728 fix.go:190] guest clock delta is within tolerance: 67.043756ms
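fix.go reads the guest's clock over SSH (the date +%s.%N above), compares it with the host's, and only forces a resync when the delta exceeds a tolerance; the 67ms seen here is well inside it. A minimal sketch of that check (the 2-second tolerance is illustrative, not taken from minikube):

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance reports whether the guest clock is close enough to the
    // host clock that no resync is required.
    func withinTolerance(host, guest time.Time, tol time.Duration) bool {
        delta := host.Sub(guest)
        if delta < 0 {
            delta = -delta
        }
        return delta <= tol
    }

    func main() {
        host := time.Now()
        guest := host.Add(67 * time.Millisecond) // the delta observed in the log
        fmt.Println("within tolerance:", withinTolerance(host, guest, 2*time.Second))
    }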
	I1101 00:35:15.291275   59728 start.go:83] releasing machines lock for "old-k8s-version-993392", held for 20.076726731s
	I1101 00:35:15.291309   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .DriverName
	I1101 00:35:15.291615   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetIP
	I1101 00:35:15.294488   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:15.294831   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:ea:1c", ip: ""} in network mk-old-k8s-version-993392: {Iface:virbr1 ExpiryTime:2023-11-01 01:32:19 +0000 UTC Type:0 Mac:52:54:00:f4:ea:1c Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-993392 Clientid:01:52:54:00:f4:ea:1c}
	I1101 00:35:15.294874   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined IP address 192.168.39.70 and MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:15.295039   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .DriverName
	I1101 00:35:15.295569   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .DriverName
	I1101 00:35:15.295750   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .DriverName
	I1101 00:35:15.295846   59728 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 00:35:15.295887   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHHostname
	I1101 00:35:15.296020   59728 ssh_runner.go:195] Run: cat /version.json
	I1101 00:35:15.296047   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHHostname
	I1101 00:35:15.298648   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:15.299023   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:ea:1c", ip: ""} in network mk-old-k8s-version-993392: {Iface:virbr1 ExpiryTime:2023-11-01 01:32:19 +0000 UTC Type:0 Mac:52:54:00:f4:ea:1c Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-993392 Clientid:01:52:54:00:f4:ea:1c}
	I1101 00:35:15.299061   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined IP address 192.168.39.70 and MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:15.299111   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:15.299231   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHPort
	I1101 00:35:15.299405   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHKeyPath
	I1101 00:35:15.299503   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:ea:1c", ip: ""} in network mk-old-k8s-version-993392: {Iface:virbr1 ExpiryTime:2023-11-01 01:32:19 +0000 UTC Type:0 Mac:52:54:00:f4:ea:1c Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-993392 Clientid:01:52:54:00:f4:ea:1c}
	I1101 00:35:15.299540   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined IP address 192.168.39.70 and MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:15.299572   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHUsername
	I1101 00:35:15.299709   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHPort
	I1101 00:35:15.299717   59728 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/old-k8s-version-993392/id_rsa Username:docker}
	I1101 00:35:15.299869   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHKeyPath
	I1101 00:35:15.300027   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHUsername
	I1101 00:35:15.300146   59728 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/old-k8s-version-993392/id_rsa Username:docker}
	I1101 00:35:15.385138   59728 ssh_runner.go:195] Run: systemctl --version
	I1101 00:35:15.412437   59728 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 00:35:15.418353   59728 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 00:35:15.418433   59728 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1101 00:35:15.428577   59728 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1101 00:35:15.447498   59728 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
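The two find/sed passes above normalize any bridge and podman CNI configs to the 10.244.0.0/16 pod CIDR and its .1 gateway. An equivalent rewrite in Go using regular expressions, mirroring the sed expressions rather than parsing the conflist as JSON (the input is a made-up fragment):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Made-up fragment standing in for /etc/cni/net.d/87-podman-bridge.conflist.
        conf := []byte(`{ "ipam": { "subnet": "10.88.0.0/16", "gateway": "10.88.0.1" } }`)
        subnetRe := regexp.MustCompile(`"subnet":\s*"[^"]*"`)
        gatewayRe := regexp.MustCompile(`"gateway":\s*"[^"]*"`)
        conf = subnetRe.ReplaceAll(conf, []byte(`"subnet": "10.244.0.0/16"`))
        conf = gatewayRe.ReplaceAll(conf, []byte(`"gateway": "10.244.0.1"`))
        fmt.Println(string(conf))
    }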
	I1101 00:35:15.447528   59728 start.go:472] detecting cgroup driver to use...
	I1101 00:35:15.447666   59728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:35:15.476915   59728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I1101 00:35:15.487890   59728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1101 00:35:15.497097   59728 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1101 00:35:15.497151   59728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1101 00:35:15.506003   59728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1101 00:35:15.515214   59728 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1101 00:35:15.524281   59728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1101 00:35:15.533202   59728 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 00:35:15.542332   59728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1101 00:35:15.551157   59728 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 00:35:15.559253   59728 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 00:35:15.567327   59728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:35:15.668001   59728 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1101 00:35:15.685239   59728 start.go:472] detecting cgroup driver to use...
	I1101 00:35:15.685319   59728 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1101 00:35:15.702282   59728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:35:15.719566   59728 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 00:35:15.736212   59728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:35:15.748469   59728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 00:35:15.759693   59728 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1101 00:35:15.785652   59728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 00:35:15.798272   59728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:35:15.815403   59728 ssh_runner.go:195] Run: which cri-dockerd
	I1101 00:35:15.819393   59728 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1101 00:35:15.827642   59728 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1101 00:35:15.844635   59728 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1101 00:35:15.953431   59728 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1101 00:35:16.067705   59728 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
	I1101 00:35:16.067868   59728 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1101 00:35:16.083608   59728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:35:16.210028   59728 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1101 00:35:17.682719   59728 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.472645709s)
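docker.go:561 above pushes a 130-byte daemon.json selecting the cgroupfs cgroup driver before restarting dockerd. The log does not show the file's contents; a plausible sketch of generating such a file, assuming the standard exec-opts key:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Assumed shape: the log only says the pushed daemon.json is 130 bytes
        // and configures docker to use the cgroupfs driver.
        cfg := map[string]any{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        out, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }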
	I1101 00:35:17.682797   59728 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 00:35:17.711222   59728 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 00:35:15.318233   59907 main.go:141] libmachine: (no-preload-658664) Calling .Start
	I1101 00:35:15.318442   59907 main.go:141] libmachine: (no-preload-658664) Ensuring networks are active...
	I1101 00:35:15.319337   59907 main.go:141] libmachine: (no-preload-658664) Ensuring network default is active
	I1101 00:35:15.319660   59907 main.go:141] libmachine: (no-preload-658664) Ensuring network mk-no-preload-658664 is active
	I1101 00:35:15.320006   59907 main.go:141] libmachine: (no-preload-658664) Getting domain xml...
	I1101 00:35:15.320707   59907 main.go:141] libmachine: (no-preload-658664) Creating domain...
	I1101 00:35:16.620215   59907 main.go:141] libmachine: (no-preload-658664) Waiting to get IP...
	I1101 00:35:16.621044   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:16.621542   59907 main.go:141] libmachine: (no-preload-658664) DBG | unable to find current IP address of domain no-preload-658664 in network mk-no-preload-658664
	I1101 00:35:16.621632   59907 main.go:141] libmachine: (no-preload-658664) DBG | I1101 00:35:16.621519   60191 retry.go:31] will retry after 192.887682ms: waiting for machine to come up
	I1101 00:35:16.816184   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:16.816786   59907 main.go:141] libmachine: (no-preload-658664) DBG | unable to find current IP address of domain no-preload-658664 in network mk-no-preload-658664
	I1101 00:35:16.816814   59907 main.go:141] libmachine: (no-preload-658664) DBG | I1101 00:35:16.816730   60191 retry.go:31] will retry after 268.503813ms: waiting for machine to come up
	I1101 00:35:17.087319   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:17.087813   59907 main.go:141] libmachine: (no-preload-658664) DBG | unable to find current IP address of domain no-preload-658664 in network mk-no-preload-658664
	I1101 00:35:17.087837   59907 main.go:141] libmachine: (no-preload-658664) DBG | I1101 00:35:17.087771   60191 retry.go:31] will retry after 471.287558ms: waiting for machine to come up
	I1101 00:35:17.560533   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:17.561057   59907 main.go:141] libmachine: (no-preload-658664) DBG | unable to find current IP address of domain no-preload-658664 in network mk-no-preload-658664
	I1101 00:35:17.561093   59907 main.go:141] libmachine: (no-preload-658664) DBG | I1101 00:35:17.560994   60191 retry.go:31] will retry after 552.560833ms: waiting for machine to come up
	I1101 00:35:18.115366   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:18.115984   59907 main.go:141] libmachine: (no-preload-658664) DBG | unable to find current IP address of domain no-preload-658664 in network mk-no-preload-658664
	I1101 00:35:18.116144   59907 main.go:141] libmachine: (no-preload-658664) DBG | I1101 00:35:18.116092   60191 retry.go:31] will retry after 611.517299ms: waiting for machine to come up
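The retry.go lines interleaved above (192ms, 268ms, 471ms, 552ms, 611ms, ...) are libmachine polling the libvirt DHCP leases for the restarted VM's address with a growing, jittered backoff. A generic sketch of that loop (the constants are illustrative, not minikube's):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryBackoff calls fn until it succeeds or attempts run out, sleeping a
    // growing, jittered interval between tries, like retry.go in the log.
    func retryBackoff(attempts int, base time.Duration, fn func() error) error {
        delay := base
        for i := 0; i < attempts; i++ {
            if err := fn(); err == nil {
                return nil
            }
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
            delay = delay * 3 / 2 // grow roughly 1.5x per attempt
        }
        return errors.New("gave up waiting for the machine IP")
    }

    func main() {
        tries := 0
        err := retryBackoff(5, 200*time.Millisecond, func() error {
            tries++
            if tries < 3 {
                return errors.New("no DHCP lease yet")
            }
            return nil
        })
        fmt.Println("tries:", tries, "err:", err)
    }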
	I1101 00:35:17.742852   59728 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.6 ...
	I1101 00:35:17.742894   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetIP
	I1101 00:35:17.745585   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:17.745941   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:ea:1c", ip: ""} in network mk-old-k8s-version-993392: {Iface:virbr1 ExpiryTime:2023-11-01 01:32:19 +0000 UTC Type:0 Mac:52:54:00:f4:ea:1c Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-993392 Clientid:01:52:54:00:f4:ea:1c}
	I1101 00:35:17.745973   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined IP address 192.168.39.70 and MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:17.746216   59728 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 00:35:17.750403   59728 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 00:35:17.762691   59728 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1101 00:35:17.762745   59728 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 00:35:17.785197   59728 docker.go:699] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	k8s.gcr.io/pause:3.1
	registry.k8s.io/pause:3.1
	
	-- /stdout --
	I1101 00:35:17.785220   59728 docker.go:705] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I1101 00:35:17.785260   59728 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1101 00:35:17.796066   59728 ssh_runner.go:195] Run: which lz4
	I1101 00:35:17.800006   59728 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 00:35:17.804320   59728 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 00:35:17.804351   59728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I1101 00:35:19.314218   59728 docker.go:663] Took 1.514240 seconds to copy over tarball
	I1101 00:35:19.314284   59728 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
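This is the preload fast path: stat the tarball on the guest, scp the ~370 MB preloaded-images archive when it is missing, then unpack it into /var with lz4 as tar's decompressor. The same three steps as a local Go sketch (paths hypothetical; the scp step is elided because in the real run it goes over the established SSH session):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const tarball = "/preloaded.tar.lz4" // guest-side path from the log

        // Step 1: existence check, mirroring `stat -c "%s %y" /preloaded.tar.lz4`.
        if _, err := os.Stat(tarball); err != nil {
            fmt.Println("tarball missing, would scp it over first:", err)
            return // step 2 (the ~370 MB copy) is elided in this sketch
        }

        // Step 3: unpack with lz4 as the decompressor, like `tar -I lz4 -C /var -xf`.
        if out, err := exec.Command("tar", "-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput(); err != nil {
            fmt.Printf("tar failed: %v\n%s", err, out)
        }
    }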
	I1101 00:35:18.728947   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:18.729510   59907 main.go:141] libmachine: (no-preload-658664) DBG | unable to find current IP address of domain no-preload-658664 in network mk-no-preload-658664
	I1101 00:35:18.729539   59907 main.go:141] libmachine: (no-preload-658664) DBG | I1101 00:35:18.729463   60191 retry.go:31] will retry after 621.458884ms: waiting for machine to come up
	I1101 00:35:19.352351   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:19.352862   59907 main.go:141] libmachine: (no-preload-658664) DBG | unable to find current IP address of domain no-preload-658664 in network mk-no-preload-658664
	I1101 00:35:19.352885   59907 main.go:141] libmachine: (no-preload-658664) DBG | I1101 00:35:19.352818   60191 retry.go:31] will retry after 1.047159856s: waiting for machine to come up
	I1101 00:35:20.402011   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:20.402707   59907 main.go:141] libmachine: (no-preload-658664) DBG | unable to find current IP address of domain no-preload-658664 in network mk-no-preload-658664
	I1101 00:35:20.402742   59907 main.go:141] libmachine: (no-preload-658664) DBG | I1101 00:35:20.402671   60191 retry.go:31] will retry after 1.273372197s: waiting for machine to come up
	I1101 00:35:21.677582   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:21.678109   59907 main.go:141] libmachine: (no-preload-658664) DBG | unable to find current IP address of domain no-preload-658664 in network mk-no-preload-658664
	I1101 00:35:21.678138   59907 main.go:141] libmachine: (no-preload-658664) DBG | I1101 00:35:21.678081   60191 retry.go:31] will retry after 1.545852712s: waiting for machine to come up
	I1101 00:35:23.225992   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:23.226570   59907 main.go:141] libmachine: (no-preload-658664) DBG | unable to find current IP address of domain no-preload-658664 in network mk-no-preload-658664
	I1101 00:35:23.226603   59907 main.go:141] libmachine: (no-preload-658664) DBG | I1101 00:35:23.226515   60191 retry.go:31] will retry after 2.242582496s: waiting for machine to come up
	I1101 00:35:21.801438   59728 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.487132258s)
	I1101 00:35:21.801463   59728 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 00:35:21.846235   59728 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1101 00:35:21.856435   59728 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3100 bytes)
	I1101 00:35:21.872387   59728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:35:21.987914   59728 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1101 00:35:23.977331   59728 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.989377575s)
	I1101 00:35:23.977428   59728 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 00:35:24.002030   59728 docker.go:699] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	k8s.gcr.io/pause:3.1
	registry.k8s.io/pause:3.1
	
	-- /stdout --
	I1101 00:35:24.002074   59728 docker.go:705] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I1101 00:35:24.002081   59728 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1101 00:35:24.003931   59728 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 00:35:24.003948   59728 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1101 00:35:24.003951   59728 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1101 00:35:24.003995   59728 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 00:35:24.003933   59728 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1101 00:35:24.003928   59728 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1101 00:35:24.003928   59728 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1101 00:35:24.004212   59728 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1101 00:35:24.004942   59728 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1101 00:35:24.004963   59728 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1101 00:35:24.004967   59728 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1101 00:35:24.004971   59728 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1101 00:35:24.004944   59728 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1101 00:35:24.004998   59728 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 00:35:24.005044   59728 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 00:35:24.005054   59728 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1101 00:35:24.154700   59728 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1101 00:35:24.157786   59728 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 00:35:24.169544   59728 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1101 00:35:24.171003   59728 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1101 00:35:24.173906   59728 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1101 00:35:24.174051   59728 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1101 00:35:24.177391   59728 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1101 00:35:24.217167   59728 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1101 00:35:24.217226   59728 docker.go:324] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1101 00:35:24.217244   59728 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1101 00:35:24.217274   59728 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1101 00:35:24.217289   59728 docker.go:324] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 00:35:24.217335   59728 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1101 00:35:24.261867   59728 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1101 00:35:24.261939   59728 docker.go:324] Removing image: registry.k8s.io/coredns:1.6.2
	I1101 00:35:24.261954   59728 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1101 00:35:24.261989   59728 docker.go:324] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1101 00:35:24.261993   59728 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I1101 00:35:24.262113   59728 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1101 00:35:24.262149   59728 docker.go:324] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1101 00:35:24.262091   59728 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I1101 00:35:24.262198   59728 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1101 00:35:24.270118   59728 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1101 00:35:24.270169   59728 docker.go:324] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1101 00:35:24.270219   59728 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I1101 00:35:24.290875   59728 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7251/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1101 00:35:24.290890   59728 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7251/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1101 00:35:24.325561   59728 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7251/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1101 00:35:24.325571   59728 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7251/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1101 00:35:24.325615   59728 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7251/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1101 00:35:24.325687   59728 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-7251/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1101 00:35:24.612148   59728 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 00:35:24.630986   59728 cache_images.go:92] LoadImages completed in 628.888247ms
	W1101 00:35:24.631086   59728 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17486-7251/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
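cache_images.go treats an image as present only when the daemon reports the exact expected image ID; a tag pointing at a different build "needs transfer", gets removed, and is reloaded from the on-disk cache. The X warning above fires because the cached tarball for kube-controller-manager_v1.16.0 was never downloaded, so the reload has nothing to load; the run continues past it below. A sketch of the presence check (the expected ID is a placeholder):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // needsTransfer reports whether image must be (re)loaded: either the daemon
    // lacks it entirely, or its ID differs from the build we expect.
    func needsTransfer(image, wantID string) bool {
        out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
        if err != nil {
            return true // not present at all
        }
        // docker prints "sha256:<hex>"; compare it against the expected hash.
        return strings.TrimSpace(string(out)) != wantID
    }

    func main() {
        // Placeholder ID; the real expected hashes come from the local image cache.
        fmt.Println(needsTransfer("registry.k8s.io/pause:3.1", "sha256:0000000000000000"))
    }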
	I1101 00:35:24.631244   59728 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1101 00:35:24.657737   59728 cni.go:84] Creating CNI manager for ""
	I1101 00:35:24.657768   59728 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1101 00:35:24.657797   59728 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 00:35:24.657824   59728 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.70 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-993392 NodeName:old-k8s-version-993392 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1101 00:35:24.658021   59728 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-993392"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-993392
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.70:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 00:35:24.658137   59728 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-993392 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-993392 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 00:35:24.658205   59728 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1101 00:35:24.667621   59728 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 00:35:24.667709   59728 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 00:35:24.677985   59728 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (349 bytes)
	I1101 00:35:24.695180   59728 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 00:35:24.712230   59728 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2178 bytes)
	I1101 00:35:24.731280   59728 ssh_runner.go:195] Run: grep 192.168.39.70	control-plane.minikube.internal$ /etc/hosts
	I1101 00:35:24.734826   59728 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 00:35:24.748889   59728 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/old-k8s-version-993392 for IP: 192.168.39.70
	I1101 00:35:24.748935   59728 certs.go:190] acquiring lock for shared ca certs: {Name:mkd78a553474b872bb63abf547b6fa0a317dc3be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:35:24.749117   59728 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.key
	I1101 00:35:24.749177   59728 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.key
	I1101 00:35:24.749280   59728 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/old-k8s-version-993392/client.key
	I1101 00:35:24.749361   59728 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/old-k8s-version-993392/apiserver.key.5467de6f
	I1101 00:35:24.749413   59728 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/old-k8s-version-993392/proxy-client.key
	I1101 00:35:24.749568   59728 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463.pem (1338 bytes)
	W1101 00:35:24.749608   59728 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463_empty.pem, impossibly tiny 0 bytes
	I1101 00:35:24.749624   59728 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 00:35:24.749672   59728 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem (1082 bytes)
	I1101 00:35:24.749715   59728 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem (1123 bytes)
	I1101 00:35:24.749746   59728 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem (1675 bytes)
	I1101 00:35:24.749806   59728 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem (1708 bytes)
	I1101 00:35:24.750707   59728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/old-k8s-version-993392/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 00:35:24.777202   59728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/old-k8s-version-993392/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 00:35:24.799061   59728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/old-k8s-version-993392/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 00:35:24.820727   59728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/old-k8s-version-993392/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 00:35:24.845248   59728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 00:35:24.869551   59728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 00:35:24.893612   59728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 00:35:24.919831   59728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 00:35:24.942704   59728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 00:35:24.967826   59728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463.pem --> /usr/share/ca-certificates/14463.pem (1338 bytes)
	I1101 00:35:24.991012   59728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem --> /usr/share/ca-certificates/144632.pem (1708 bytes)
	I1101 00:35:25.012363   59728 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
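
The run above pushes each profile certificate and the kubeconfig into the VM with scp over the already-established SSH session. A minimal sketch of the same idea using golang.org/x/crypto/ssh and a "sudo tee" pipe; the helper name, address, and credentials are placeholders, not minikube's actual ssh_runner API:

    // pushfile.go: sketch of copying bytes to a remote root-owned path
    // over SSH, in the spirit of the scp steps above.
    package main

    import (
        "bytes"
        "fmt"
        "log"

        "golang.org/x/crypto/ssh"
    )

    // pushFile streams data into "sudo tee <dst>" on the remote host, so
    // the unprivileged SSH user can still populate /var/lib/minikube.
    func pushFile(client *ssh.Client, dst string, data []byte) error {
        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()
        session.Stdin = bytes.NewReader(data)
        return session.Run(fmt.Sprintf("sudo tee %s > /dev/null", dst))
    }

    func main() {
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.Password("placeholder")}, // the real run uses the per-machine id_rsa key
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),                   // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.168.50.197:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        if err := pushFile(client, "/var/lib/minikube/certs/ca.crt", []byte("-----BEGIN CERTIFICATE-----\n...")); err != nil {
            log.Fatal(err)
        }
    }
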
	I1101 00:35:25.029684   59728 ssh_runner.go:195] Run: openssl version
	I1101 00:35:25.036583   59728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 00:35:25.046638   59728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:35:25.052435   59728 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:35:25.052499   59728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:35:25.059364   59728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 00:35:25.072232   59728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14463.pem && ln -fs /usr/share/ca-certificates/14463.pem /etc/ssl/certs/14463.pem"
	I1101 00:35:25.082389   59728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14463.pem
	I1101 00:35:25.087229   59728 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:48 /usr/share/ca-certificates/14463.pem
	I1101 00:35:25.087307   59728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14463.pem
	I1101 00:35:25.093247   59728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14463.pem /etc/ssl/certs/51391683.0"
	I1101 00:35:25.470849   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:25.471388   59907 main.go:141] libmachine: (no-preload-658664) DBG | unable to find current IP address of domain no-preload-658664 in network mk-no-preload-658664
	I1101 00:35:25.471410   59907 main.go:141] libmachine: (no-preload-658664) DBG | I1101 00:35:25.471353   60191 retry.go:31] will retry after 2.724489482s: waiting for machine to come up
	I1101 00:35:28.199328   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:28.199850   59907 main.go:141] libmachine: (no-preload-658664) DBG | unable to find current IP address of domain no-preload-658664 in network mk-no-preload-658664
	I1101 00:35:28.199876   59907 main.go:141] libmachine: (no-preload-658664) DBG | I1101 00:35:28.199812   60191 retry.go:31] will retry after 2.369452725s: waiting for machine to come up
	I1101 00:35:25.106028   59728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144632.pem && ln -fs /usr/share/ca-certificates/144632.pem /etc/ssl/certs/144632.pem"
	I1101 00:35:25.117553   59728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144632.pem
	I1101 00:35:25.122339   59728 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:48 /usr/share/ca-certificates/144632.pem
	I1101 00:35:25.122417   59728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144632.pem
	I1101 00:35:25.128141   59728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144632.pem /etc/ssl/certs/3ec20f2e.0"
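
Each of the three CA bundles above is hashed with "openssl x509 -hash -noout" and then symlinked as /etc/ssl/certs/<hash>.0, which is the directory layout OpenSSL uses to look up trusted CAs by subject-name hash. A small Go sketch of the same idiom, with an illustrative path:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        // "openssl x509 -hash -noout" prints the subject-name hash that
        // OpenSSL uses when scanning a CA directory.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        // The ".0" suffix is a collision counter; ".1", ".2", ... would be
        // used if several CAs shared the same subject hash.
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        os.Remove(link) // mirror "ln -fs": replace any stale link
        if err := os.Symlink(pem, link); err != nil {
            panic(err)
        }
    }
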
	I1101 00:35:25.138655   59728 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 00:35:25.143298   59728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 00:35:25.149232   59728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 00:35:25.158024   59728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 00:35:25.165517   59728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 00:35:25.172683   59728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 00:35:25.180148   59728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
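
Each "openssl x509 -checkend 86400" call above exits non-zero if the certificate expires within the next 24 hours, which is what would trigger regeneration. The equivalent check in pure Go, as a hedged sketch with an illustrative path:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, the same question "openssl x509 -checkend 86400" answers
    // with its exit status.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
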
	I1101 00:35:25.185755   59728 kubeadm.go:404] StartCluster: {Name:old-k8s-version-993392 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-993392 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:35:25.185928   59728 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1101 00:35:25.204064   59728 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 00:35:25.213306   59728 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 00:35:25.213334   59728 kubeadm.go:636] restartCluster start
	I1101 00:35:25.213402   59728 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 00:35:25.221838   59728 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:25.222243   59728 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-993392" does not appear in /home/jenkins/minikube-integration/17486-7251/kubeconfig
	I1101 00:35:25.222336   59728 kubeconfig.go:146] "old-k8s-version-993392" context is missing from /home/jenkins/minikube-integration/17486-7251/kubeconfig - will repair!
	I1101 00:35:25.222631   59728 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7251/kubeconfig: {Name:mk525de6243b20b40961c1a878f4272a26e9a097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
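
The kubeconfig repair above happens under a named write lock with a 500ms retry delay and a 1m timeout. minikube's own lock package is not shown in this log; what follows is a generic lockfile sketch with the same delay/timeout shape, not its actual implementation:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquire polls for an exclusive lockfile, retrying every delay until
    // timeout, mirroring the Delay:500ms Timeout:1m0s parameters above.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if !errors.Is(err, os.ErrExist) {
                return nil, err
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire("/tmp/kubeconfig.lock", 500*time.Millisecond, time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        // ... rewrite the kubeconfig while holding the lock ...
    }
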
	I1101 00:35:25.223958   59728 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 00:35:25.232104   59728 api_server.go:166] Checking apiserver status ...
	I1101 00:35:25.232164   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:25.245405   59728 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:25.245431   59728 api_server.go:166] Checking apiserver status ...
	I1101 00:35:25.245481   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:25.255645   59728 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:25.756427   59728 api_server.go:166] Checking apiserver status ...
	I1101 00:35:25.756496   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:25.767627   59728 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:26.256174   59728 api_server.go:166] Checking apiserver status ...
	I1101 00:35:26.256256   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:26.267584   59728 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:26.755866   59728 api_server.go:166] Checking apiserver status ...
	I1101 00:35:26.755988   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:26.768093   59728 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:27.256696   59728 api_server.go:166] Checking apiserver status ...
	I1101 00:35:27.256792   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:27.274324   59728 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:27.755833   59728 api_server.go:166] Checking apiserver status ...
	I1101 00:35:27.755926   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:27.766992   59728 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:28.256544   59728 api_server.go:166] Checking apiserver status ...
	I1101 00:35:28.256660   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:28.268506   59728 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:28.755993   59728 api_server.go:166] Checking apiserver status ...
	I1101 00:35:28.756061   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:28.769736   59728 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:29.256581   59728 api_server.go:166] Checking apiserver status ...
	I1101 00:35:29.256683   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:29.267929   59728 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:29.756586   59728 api_server.go:166] Checking apiserver status ...
	I1101 00:35:29.756711   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:29.768260   59728 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:30.570441   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:30.570878   59907 main.go:141] libmachine: (no-preload-658664) DBG | unable to find current IP address of domain no-preload-658664 in network mk-no-preload-658664
	I1101 00:35:30.570901   59907 main.go:141] libmachine: (no-preload-658664) DBG | I1101 00:35:30.570823   60191 retry.go:31] will retry after 3.027439383s: waiting for machine to come up
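
The libmachine goroutine above is polling for the VM's DHCP lease, sleeping a jittered interval between attempts (2.72s, then 2.37s, then 3.03s), so waits can shrink as well as grow. A sketch of that retry shape; the exact schedule belongs to minikube's retry package and is not reproduced here:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithJitter keeps calling op until it succeeds or attempts run
    // out, sleeping a randomized interval in [base/2, 3*base/2) between
    // tries, which yields the non-monotonic waits seen in the log.
    func retryWithJitter(op func() error, attempts int, base time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            wait := base/2 + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("retry %d: will retry after %s: %v\n", i+1, wait, err)
            time.Sleep(wait)
        }
        return err
    }

    func main() {
        tries := 0
        err := retryWithJitter(func() error {
            tries++
            if tries < 3 {
                return errors.New("unable to find current IP address")
            }
            return nil
        }, 10, 2*time.Second)
        fmt.Println("result:", err)
    }
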
	I1101 00:35:30.256534   59728 api_server.go:166] Checking apiserver status ...
	I1101 00:35:30.256628   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:30.268788   59728 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:30.756408   59728 api_server.go:166] Checking apiserver status ...
	I1101 00:35:30.756496   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:30.768311   59728 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:31.255843   59728 api_server.go:166] Checking apiserver status ...
	I1101 00:35:31.255953   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:31.267609   59728 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:31.756174   59728 api_server.go:166] Checking apiserver status ...
	I1101 00:35:31.756294   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:31.769281   59728 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:32.255860   59728 api_server.go:166] Checking apiserver status ...
	I1101 00:35:32.255937   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:32.267668   59728 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:32.756256   59728 api_server.go:166] Checking apiserver status ...
	I1101 00:35:32.756330   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:32.768141   59728 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:33.255736   59728 api_server.go:166] Checking apiserver status ...
	I1101 00:35:33.255828   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:33.267984   59728 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:33.756217   59728 api_server.go:166] Checking apiserver status ...
	I1101 00:35:33.756308   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:33.767715   59728 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:34.256750   59728 api_server.go:166] Checking apiserver status ...
	I1101 00:35:34.256857   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:34.267855   59728 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:34.756566   59728 api_server.go:166] Checking apiserver status ...
	I1101 00:35:34.756655   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:34.768313   59728 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
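
While the other goroutines provision their VMs, the old-k8s-version restart path above probes for a running apiserver with "sudo pgrep" on a roughly 500ms cadence, treating pgrep's non-zero exit status as "not up yet". A bounded version of that poll loop, as a sketch:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls pgrep for a kube-apiserver process until it
    // appears or the deadline passes, the same probe the log repeats
    // above. pgrep exits non-zero when nothing matches the pattern.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
            if err := cmd.Run(); err == nil {
                return nil // a matching process exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServer(30 * time.Second); err != nil {
            fmt.Println(err)
        }
    }
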
	I1101 00:35:35.819988   60028 start.go:369] acquired machines lock for "default-k8s-diff-port-195256" in 31.110018274s
	I1101 00:35:35.820050   60028 start.go:96] Skipping create...Using existing machine configuration
	I1101 00:35:35.820062   60028 fix.go:54] fixHost starting: 
	I1101 00:35:35.820481   60028 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:35:35.820538   60028 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:35:35.837497   60028 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34429
	I1101 00:35:35.837905   60028 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:35:35.838373   60028 main.go:141] libmachine: Using API Version  1
	I1101 00:35:35.838402   60028 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:35:35.838732   60028 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:35:35.838922   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .DriverName
	I1101 00:35:35.839088   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetState
	I1101 00:35:35.840507   60028 fix.go:102] recreateIfNeeded on default-k8s-diff-port-195256: state=Stopped err=<nil>
	I1101 00:35:35.840531   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .DriverName
	W1101 00:35:35.840688   60028 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 00:35:35.842752   60028 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-195256" ...
	I1101 00:35:33.602034   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:33.602571   59907 main.go:141] libmachine: (no-preload-658664) Found IP for machine: 192.168.50.197
	I1101 00:35:33.602593   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has current primary IP address 192.168.50.197 and MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:33.602602   59907 main.go:141] libmachine: (no-preload-658664) Reserving static IP address...
	I1101 00:35:33.603110   59907 main.go:141] libmachine: (no-preload-658664) DBG | found host DHCP lease matching {name: "no-preload-658664", mac: "52:54:00:9b:37:ac", ip: "192.168.50.197"} in network mk-no-preload-658664: {Iface:virbr3 ExpiryTime:2023-11-01 01:32:48 +0000 UTC Type:0 Mac:52:54:00:9b:37:ac Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:no-preload-658664 Clientid:01:52:54:00:9b:37:ac}
	I1101 00:35:33.603145   59907 main.go:141] libmachine: (no-preload-658664) DBG | skip adding static IP to network mk-no-preload-658664 - found existing host DHCP lease matching {name: "no-preload-658664", mac: "52:54:00:9b:37:ac", ip: "192.168.50.197"}
	I1101 00:35:33.603154   59907 main.go:141] libmachine: (no-preload-658664) Reserved static IP address: 192.168.50.197
	I1101 00:35:33.603172   59907 main.go:141] libmachine: (no-preload-658664) Waiting for SSH to be available...
	I1101 00:35:33.603184   59907 main.go:141] libmachine: (no-preload-658664) DBG | Getting to WaitForSSH function...
	I1101 00:35:33.605167   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:33.605500   59907 main.go:141] libmachine: (no-preload-658664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:37:ac", ip: ""} in network mk-no-preload-658664: {Iface:virbr3 ExpiryTime:2023-11-01 01:32:48 +0000 UTC Type:0 Mac:52:54:00:9b:37:ac Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:no-preload-658664 Clientid:01:52:54:00:9b:37:ac}
	I1101 00:35:33.605537   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined IP address 192.168.50.197 and MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:33.605645   59907 main.go:141] libmachine: (no-preload-658664) DBG | Using SSH client type: external
	I1101 00:35:33.605677   59907 main.go:141] libmachine: (no-preload-658664) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/no-preload-658664/id_rsa (-rw-------)
	I1101 00:35:33.605701   59907 main.go:141] libmachine: (no-preload-658664) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.197 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7251/.minikube/machines/no-preload-658664/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 00:35:33.605713   59907 main.go:141] libmachine: (no-preload-658664) DBG | About to run SSH command:
	I1101 00:35:33.605733   59907 main.go:141] libmachine: (no-preload-658664) DBG | exit 0
	I1101 00:35:33.734452   59907 main.go:141] libmachine: (no-preload-658664) DBG | SSH cmd err, output: <nil>: 
	I1101 00:35:33.734867   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetConfigRaw
	I1101 00:35:33.735526   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetIP
	I1101 00:35:33.737745   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:33.738091   59907 main.go:141] libmachine: (no-preload-658664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:37:ac", ip: ""} in network mk-no-preload-658664: {Iface:virbr3 ExpiryTime:2023-11-01 01:32:48 +0000 UTC Type:0 Mac:52:54:00:9b:37:ac Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:no-preload-658664 Clientid:01:52:54:00:9b:37:ac}
	I1101 00:35:33.738121   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined IP address 192.168.50.197 and MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:33.738396   59907 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/no-preload-658664/config.json ...
	I1101 00:35:33.738616   59907 machine.go:88] provisioning docker machine ...
	I1101 00:35:33.738635   59907 main.go:141] libmachine: (no-preload-658664) Calling .DriverName
	I1101 00:35:33.738830   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetMachineName
	I1101 00:35:33.739043   59907 buildroot.go:166] provisioning hostname "no-preload-658664"
	I1101 00:35:33.739063   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetMachineName
	I1101 00:35:33.739196   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHHostname
	I1101 00:35:33.741315   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:33.741648   59907 main.go:141] libmachine: (no-preload-658664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:37:ac", ip: ""} in network mk-no-preload-658664: {Iface:virbr3 ExpiryTime:2023-11-01 01:32:48 +0000 UTC Type:0 Mac:52:54:00:9b:37:ac Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:no-preload-658664 Clientid:01:52:54:00:9b:37:ac}
	I1101 00:35:33.741678   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined IP address 192.168.50.197 and MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:33.741836   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHPort
	I1101 00:35:33.742011   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHKeyPath
	I1101 00:35:33.742166   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHKeyPath
	I1101 00:35:33.742301   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHUsername
	I1101 00:35:33.742452   59907 main.go:141] libmachine: Using SSH client type: native
	I1101 00:35:33.742833   59907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.197 22 <nil> <nil>}
	I1101 00:35:33.742851   59907 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-658664 && echo "no-preload-658664" | sudo tee /etc/hostname
	I1101 00:35:33.866166   59907 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-658664
	
	I1101 00:35:33.866203   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHHostname
	I1101 00:35:33.869100   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:33.869420   59907 main.go:141] libmachine: (no-preload-658664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:37:ac", ip: ""} in network mk-no-preload-658664: {Iface:virbr3 ExpiryTime:2023-11-01 01:32:48 +0000 UTC Type:0 Mac:52:54:00:9b:37:ac Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:no-preload-658664 Clientid:01:52:54:00:9b:37:ac}
	I1101 00:35:33.869459   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined IP address 192.168.50.197 and MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:33.869640   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHPort
	I1101 00:35:33.869838   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHKeyPath
	I1101 00:35:33.870018   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHKeyPath
	I1101 00:35:33.870143   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHUsername
	I1101 00:35:33.870301   59907 main.go:141] libmachine: Using SSH client type: native
	I1101 00:35:33.870700   59907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.197 22 <nil> <nil>}
	I1101 00:35:33.870726   59907 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-658664' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-658664/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-658664' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 00:35:33.985844   59907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 00:35:33.985872   59907 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7251/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7251/.minikube}
	I1101 00:35:33.985900   59907 buildroot.go:174] setting up certificates
	I1101 00:35:33.985928   59907 provision.go:83] configureAuth start
	I1101 00:35:33.985946   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetMachineName
	I1101 00:35:33.986301   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetIP
	I1101 00:35:33.988782   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:33.989049   59907 main.go:141] libmachine: (no-preload-658664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:37:ac", ip: ""} in network mk-no-preload-658664: {Iface:virbr3 ExpiryTime:2023-11-01 01:32:48 +0000 UTC Type:0 Mac:52:54:00:9b:37:ac Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:no-preload-658664 Clientid:01:52:54:00:9b:37:ac}
	I1101 00:35:33.989085   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined IP address 192.168.50.197 and MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:33.989252   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHHostname
	I1101 00:35:33.991828   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:33.992196   59907 main.go:141] libmachine: (no-preload-658664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:37:ac", ip: ""} in network mk-no-preload-658664: {Iface:virbr3 ExpiryTime:2023-11-01 01:32:48 +0000 UTC Type:0 Mac:52:54:00:9b:37:ac Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:no-preload-658664 Clientid:01:52:54:00:9b:37:ac}
	I1101 00:35:33.992227   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined IP address 192.168.50.197 and MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:33.992385   59907 provision.go:138] copyHostCerts
	I1101 00:35:33.992456   59907 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem, removing ...
	I1101 00:35:33.992470   59907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem
	I1101 00:35:33.992537   59907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem (1675 bytes)
	I1101 00:35:33.992644   59907 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem, removing ...
	I1101 00:35:33.992656   59907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem
	I1101 00:35:33.992692   59907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem (1082 bytes)
	I1101 00:35:33.992767   59907 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem, removing ...
	I1101 00:35:33.992777   59907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem
	I1101 00:35:33.992803   59907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem (1123 bytes)
	I1101 00:35:33.992861   59907 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem org=jenkins.no-preload-658664 san=[192.168.50.197 192.168.50.197 localhost 127.0.0.1 minikube no-preload-658664]
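
The server certificate above is generated with a SAN list covering the VM IP, localhost, 127.0.0.1, and both hostnames, so TLS verification succeeds however the Docker daemon is addressed. A compact crypto/x509 sketch of issuing such a certificate; it is self-signed here for brevity (minikube signs with its ca.pem/ca-key.pem) and the validity period is arbitrary:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-658664"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0), // arbitrary validity for the sketch
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs mirroring the san=[...] list logged above.
            DNSNames:    []string{"localhost", "minikube", "no-preload-658664"},
            IPAddresses: []net.IP{net.ParseIP("192.168.50.197"), net.ParseIP("127.0.0.1")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
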
	I1101 00:35:34.291135   59907 provision.go:172] copyRemoteCerts
	I1101 00:35:34.291200   59907 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 00:35:34.291240   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHHostname
	I1101 00:35:34.293878   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:34.294234   59907 main.go:141] libmachine: (no-preload-658664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:37:ac", ip: ""} in network mk-no-preload-658664: {Iface:virbr3 ExpiryTime:2023-11-01 01:32:48 +0000 UTC Type:0 Mac:52:54:00:9b:37:ac Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:no-preload-658664 Clientid:01:52:54:00:9b:37:ac}
	I1101 00:35:34.294263   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined IP address 192.168.50.197 and MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:34.294426   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHPort
	I1101 00:35:34.294652   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHKeyPath
	I1101 00:35:34.294854   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHUsername
	I1101 00:35:34.295026   59907 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/no-preload-658664/id_rsa Username:docker}
	I1101 00:35:34.379099   59907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1101 00:35:34.401018   59907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 00:35:34.422743   59907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 00:35:34.444295   59907 provision.go:86] duration metric: configureAuth took 458.352591ms
	I1101 00:35:34.444323   59907 buildroot.go:189] setting minikube options for container-runtime
	I1101 00:35:34.444514   59907 config.go:182] Loaded profile config "no-preload-658664": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1101 00:35:34.444556   59907 main.go:141] libmachine: (no-preload-658664) Calling .DriverName
	I1101 00:35:34.444836   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHHostname
	I1101 00:35:34.447241   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:34.447587   59907 main.go:141] libmachine: (no-preload-658664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:37:ac", ip: ""} in network mk-no-preload-658664: {Iface:virbr3 ExpiryTime:2023-11-01 01:32:48 +0000 UTC Type:0 Mac:52:54:00:9b:37:ac Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:no-preload-658664 Clientid:01:52:54:00:9b:37:ac}
	I1101 00:35:34.447627   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined IP address 192.168.50.197 and MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:34.447743   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHPort
	I1101 00:35:34.447933   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHKeyPath
	I1101 00:35:34.448115   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHKeyPath
	I1101 00:35:34.448270   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHUsername
	I1101 00:35:34.448437   59907 main.go:141] libmachine: Using SSH client type: native
	I1101 00:35:34.448810   59907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.197 22 <nil> <nil>}
	I1101 00:35:34.448824   59907 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1101 00:35:34.560172   59907 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1101 00:35:34.560201   59907 buildroot.go:70] root file system type: tmpfs
	I1101 00:35:34.560308   59907 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1101 00:35:34.560330   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHHostname
	I1101 00:35:34.562972   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:34.563317   59907 main.go:141] libmachine: (no-preload-658664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:37:ac", ip: ""} in network mk-no-preload-658664: {Iface:virbr3 ExpiryTime:2023-11-01 01:32:48 +0000 UTC Type:0 Mac:52:54:00:9b:37:ac Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:no-preload-658664 Clientid:01:52:54:00:9b:37:ac}
	I1101 00:35:34.563351   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined IP address 192.168.50.197 and MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:34.563544   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHPort
	I1101 00:35:34.563759   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHKeyPath
	I1101 00:35:34.563920   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHKeyPath
	I1101 00:35:34.564056   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHUsername
	I1101 00:35:34.564284   59907 main.go:141] libmachine: Using SSH client type: native
	I1101 00:35:34.564620   59907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.197 22 <nil> <nil>}
	I1101 00:35:34.564679   59907 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1101 00:35:34.690890   59907 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1101 00:35:34.690924   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHHostname
	I1101 00:35:34.693227   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:34.693590   59907 main.go:141] libmachine: (no-preload-658664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:37:ac", ip: ""} in network mk-no-preload-658664: {Iface:virbr3 ExpiryTime:2023-11-01 01:32:48 +0000 UTC Type:0 Mac:52:54:00:9b:37:ac Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:no-preload-658664 Clientid:01:52:54:00:9b:37:ac}
	I1101 00:35:34.693611   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined IP address 192.168.50.197 and MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:34.693787   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHPort
	I1101 00:35:34.693977   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHKeyPath
	I1101 00:35:34.694160   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHKeyPath
	I1101 00:35:34.694315   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHUsername
	I1101 00:35:34.694481   59907 main.go:141] libmachine: Using SSH client type: native
	I1101 00:35:34.694819   59907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.197 22 <nil> <nil>}
	I1101 00:35:34.694839   59907 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1101 00:35:35.577385   59907 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1101 00:35:35.577418   59907 machine.go:91] provisioned docker machine in 1.838786398s
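
The "diff -u old new || { mv; daemon-reload; enable; restart; }" command above makes the unit install idempotent: docker is only restarted when the rendered service file actually changed. The same write-if-changed idiom in Go, as a sketch that must run as root, with illustrative paths:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // installIfChanged only replaces the unit file and bounces the
    // service when the rendered content differs from what is on disk,
    // avoiding a needless daemon restart.
    func installIfChanged(path string, rendered []byte, service string) error {
        current, err := os.ReadFile(path)
        if err == nil && bytes.Equal(current, rendered) {
            return nil // nothing to do
        }
        if err := os.WriteFile(path, rendered, 0o644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"},
            {"enable", service},
            {"restart", service},
        } {
            cmd := exec.Command("systemctl", args...)
            if out, err := cmd.CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
        if err := installIfChanged("/lib/systemd/system/docker.service", unit, "docker"); err != nil {
            fmt.Println(err)
        }
    }
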
	I1101 00:35:35.577431   59907 start.go:300] post-start starting for "no-preload-658664" (driver="kvm2")
	I1101 00:35:35.577443   59907 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 00:35:35.577464   59907 main.go:141] libmachine: (no-preload-658664) Calling .DriverName
	I1101 00:35:35.577790   59907 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 00:35:35.577842   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHHostname
	I1101 00:35:35.580774   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:35.581119   59907 main.go:141] libmachine: (no-preload-658664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:37:ac", ip: ""} in network mk-no-preload-658664: {Iface:virbr3 ExpiryTime:2023-11-01 01:32:48 +0000 UTC Type:0 Mac:52:54:00:9b:37:ac Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:no-preload-658664 Clientid:01:52:54:00:9b:37:ac}
	I1101 00:35:35.581141   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined IP address 192.168.50.197 and MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:35.581337   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHPort
	I1101 00:35:35.581562   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHKeyPath
	I1101 00:35:35.581742   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHUsername
	I1101 00:35:35.581884   59907 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/no-preload-658664/id_rsa Username:docker}
	I1101 00:35:35.667716   59907 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 00:35:35.671757   59907 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 00:35:35.671784   59907 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7251/.minikube/addons for local assets ...
	I1101 00:35:35.671855   59907 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7251/.minikube/files for local assets ...
	I1101 00:35:35.671960   59907 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem -> 144632.pem in /etc/ssl/certs
	I1101 00:35:35.672089   59907 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 00:35:35.679952   59907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem --> /etc/ssl/certs/144632.pem (1708 bytes)
	I1101 00:35:35.702486   59907 start.go:303] post-start completed in 125.005399ms
	I1101 00:35:35.702532   59907 fix.go:56] fixHost completed within 20.411112318s
	I1101 00:35:35.702557   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHHostname
	I1101 00:35:35.705430   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:35.705765   59907 main.go:141] libmachine: (no-preload-658664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:37:ac", ip: ""} in network mk-no-preload-658664: {Iface:virbr3 ExpiryTime:2023-11-01 01:32:48 +0000 UTC Type:0 Mac:52:54:00:9b:37:ac Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:no-preload-658664 Clientid:01:52:54:00:9b:37:ac}
	I1101 00:35:35.705795   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined IP address 192.168.50.197 and MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:35.705941   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHPort
	I1101 00:35:35.706208   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHKeyPath
	I1101 00:35:35.706375   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHKeyPath
	I1101 00:35:35.706574   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHUsername
	I1101 00:35:35.706765   59907 main.go:141] libmachine: Using SSH client type: native
	I1101 00:35:35.707228   59907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.197 22 <nil> <nil>}
	I1101 00:35:35.707252   59907 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1101 00:35:35.819816   59907 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698798935.799761761
	
	I1101 00:35:35.819840   59907 fix.go:206] guest clock: 1698798935.799761761
	I1101 00:35:35.819850   59907 fix.go:219] Guest: 2023-11-01 00:35:35.799761761 +0000 UTC Remote: 2023-11-01 00:35:35.702536664 +0000 UTC m=+37.267788910 (delta=97.225097ms)
	I1101 00:35:35.819898   59907 fix.go:190] guest clock delta is within tolerance: 97.225097ms
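
The guest clock check above reads "date +%s.%N" over SSH and compares it against the host clock, accepting the ~97ms delta as within tolerance. A local sketch of the parse-and-compare; float parsing keeps roughly microsecond precision at current epoch values, which is ample for a skew check:

    package main

    import (
        "fmt"
        "os/exec"
        "strconv"
        "strings"
        "time"
    )

    // guestClockDelta runs "date +%s.%N" (executed locally here for
    // brevity; the real run sends it over SSH) and returns how far that
    // clock is from the local one.
    func guestClockDelta() (time.Duration, error) {
        out, err := exec.Command("date", "+%s.%N").Output()
        if err != nil {
            return 0, err
        }
        secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return time.Since(guest), nil
    }

    func main() {
        delta, err := guestClockDelta()
        if err != nil {
            panic(err)
        }
        const tolerance = time.Second
        fmt.Printf("delta=%s within tolerance=%v\n", delta, delta < tolerance && delta > -tolerance)
    }
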
	I1101 00:35:35.819906   59907 start.go:83] releasing machines lock for "no-preload-658664", held for 20.528517635s
	I1101 00:35:35.819938   59907 main.go:141] libmachine: (no-preload-658664) Calling .DriverName
	I1101 00:35:35.820191   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetIP
	I1101 00:35:35.823135   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:35.823523   59907 main.go:141] libmachine: (no-preload-658664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:37:ac", ip: ""} in network mk-no-preload-658664: {Iface:virbr3 ExpiryTime:2023-11-01 01:32:48 +0000 UTC Type:0 Mac:52:54:00:9b:37:ac Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:no-preload-658664 Clientid:01:52:54:00:9b:37:ac}
	I1101 00:35:35.823553   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined IP address 192.168.50.197 and MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:35.823717   59907 main.go:141] libmachine: (no-preload-658664) Calling .DriverName
	I1101 00:35:35.824206   59907 main.go:141] libmachine: (no-preload-658664) Calling .DriverName
	I1101 00:35:35.824385   59907 main.go:141] libmachine: (no-preload-658664) Calling .DriverName
	I1101 00:35:35.824464   59907 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 00:35:35.824504   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHHostname
	I1101 00:35:35.824587   59907 ssh_runner.go:195] Run: cat /version.json
	I1101 00:35:35.824617   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHHostname
	I1101 00:35:35.827351   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:35.827743   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:35.827778   59907 main.go:141] libmachine: (no-preload-658664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:37:ac", ip: ""} in network mk-no-preload-658664: {Iface:virbr3 ExpiryTime:2023-11-01 01:32:48 +0000 UTC Type:0 Mac:52:54:00:9b:37:ac Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:no-preload-658664 Clientid:01:52:54:00:9b:37:ac}
	I1101 00:35:35.827810   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined IP address 192.168.50.197 and MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:35.827994   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHPort
	I1101 00:35:35.828082   59907 main.go:141] libmachine: (no-preload-658664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:37:ac", ip: ""} in network mk-no-preload-658664: {Iface:virbr3 ExpiryTime:2023-11-01 01:32:48 +0000 UTC Type:0 Mac:52:54:00:9b:37:ac Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:no-preload-658664 Clientid:01:52:54:00:9b:37:ac}
	I1101 00:35:35.828112   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined IP address 192.168.50.197 and MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:35.828157   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHKeyPath
	I1101 00:35:35.828220   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHPort
	I1101 00:35:35.828312   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHUsername
	I1101 00:35:35.828381   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHKeyPath
	I1101 00:35:35.828452   59907 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/no-preload-658664/id_rsa Username:docker}
	I1101 00:35:35.828522   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHUsername
	I1101 00:35:35.828651   59907 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/no-preload-658664/id_rsa Username:docker}
	I1101 00:35:35.912036   59907 ssh_runner.go:195] Run: systemctl --version
	I1101 00:35:35.939416   59907 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 00:35:35.944796   59907 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 00:35:35.944872   59907 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 00:35:35.963635   59907 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 00:35:35.963672   59907 start.go:472] detecting cgroup driver to use...
	I1101 00:35:35.963829   59907 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:35:35.983269   59907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1101 00:35:35.993413   59907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1101 00:35:36.004665   59907 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1101 00:35:36.004748   59907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1101 00:35:36.014292   59907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1101 00:35:36.024767   59907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1101 00:35:36.034683   59907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1101 00:35:36.044316   59907 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 00:35:36.056198   59907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1101 00:35:36.067590   59907 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 00:35:36.077401   59907 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 00:35:36.087617   59907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:35:36.209724   59907 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1101 00:35:36.230210   59907 start.go:472] detecting cgroup driver to use...
	I1101 00:35:36.230292   59907 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1101 00:35:36.249772   59907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:35:36.267064   59907 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 00:35:36.289131   59907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:35:36.304310   59907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 00:35:36.315882   59907 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1101 00:35:36.349233   59907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 00:35:36.363771   59907 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:35:36.383346   59907 ssh_runner.go:195] Run: which cri-dockerd
	I1101 00:35:36.387301   59907 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1101 00:35:36.395798   59907 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1101 00:35:36.412428   59907 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1101 00:35:36.550934   59907 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1101 00:35:36.670824   59907 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
	I1101 00:35:36.670942   59907 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1101 00:35:36.689496   59907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:35:36.818660   59907 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1101 00:35:38.346976   59907 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.52827492s)
	I1101 00:35:38.347064   59907 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1101 00:35:38.484720   59907 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1101 00:35:38.626060   59907 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1101 00:35:38.757660   59907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:35:38.872810   59907 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1101 00:35:38.893627   59907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:35:39.017382   59907 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1101 00:35:39.114721   59907 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1101 00:35:39.114810   59907 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1101 00:35:39.123033   59907 start.go:540] Will wait 60s for crictl version
	I1101 00:35:39.123100   59907 ssh_runner.go:195] Run: which crictl
	I1101 00:35:39.127925   59907 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 00:35:39.186910   59907 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1101 00:35:39.186986   59907 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 00:35:39.220813   59907 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
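"Will wait 60s for socket path /var/run/cri-dockerd.sock" and "Will wait 60s for crictl version" above are simple stat-and-retry waits. A minimal sketch of that pattern, assuming a half-second poll interval (the real interval is not shown in this excerpt); `waitForSocket` is a hypothetical name:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls os.Stat until the socket path exists or the
// deadline passes, mirroring the "Will wait 60s for socket path" step.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
```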
	I1101 00:35:35.844164   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .Start
	I1101 00:35:35.844332   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Ensuring networks are active...
	I1101 00:35:35.845182   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Ensuring network default is active
	I1101 00:35:35.845617   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Ensuring network mk-default-k8s-diff-port-195256 is active
	I1101 00:35:35.846065   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Getting domain xml...
	I1101 00:35:35.846794   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Creating domain...
	I1101 00:35:37.173376   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Waiting to get IP...
	I1101 00:35:37.174495   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:37.174983   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | unable to find current IP address of domain default-k8s-diff-port-195256 in network mk-default-k8s-diff-port-195256
	I1101 00:35:37.175011   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | I1101 00:35:37.174918   60335 retry.go:31] will retry after 241.620666ms: waiting for machine to come up
	I1101 00:35:37.418659   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:37.419384   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | unable to find current IP address of domain default-k8s-diff-port-195256 in network mk-default-k8s-diff-port-195256
	I1101 00:35:37.419414   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | I1101 00:35:37.419320   60335 retry.go:31] will retry after 241.05432ms: waiting for machine to come up
	I1101 00:35:37.661819   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:37.662405   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | unable to find current IP address of domain default-k8s-diff-port-195256 in network mk-default-k8s-diff-port-195256
	I1101 00:35:37.662432   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | I1101 00:35:37.662340   60335 retry.go:31] will retry after 336.320372ms: waiting for machine to come up
	I1101 00:35:38.000750   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:38.001329   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | unable to find current IP address of domain default-k8s-diff-port-195256 in network mk-default-k8s-diff-port-195256
	I1101 00:35:38.001359   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | I1101 00:35:38.001272   60335 retry.go:31] will retry after 599.183429ms: waiting for machine to come up
	I1101 00:35:38.601867   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:38.602574   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | unable to find current IP address of domain default-k8s-diff-port-195256 in network mk-default-k8s-diff-port-195256
	I1101 00:35:38.602620   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | I1101 00:35:38.602513   60335 retry.go:31] will retry after 621.087068ms: waiting for machine to come up
	I1101 00:35:39.225041   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:39.225500   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | unable to find current IP address of domain default-k8s-diff-port-195256 in network mk-default-k8s-diff-port-195256
	I1101 00:35:39.225531   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | I1101 00:35:39.225453   60335 retry.go:31] will retry after 713.501645ms: waiting for machine to come up
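The interleaved libmachine lines above show the usual grow-the-delay retry while waiting for the VM's DHCP lease ("will retry after 241ms ... 713ms"). A sketch of that shape with jittered, roughly exponential backoff; the condition function and constants here are illustrative, not libmachine's:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn with a jittered, growing delay, the same
// shape as retry.go's "will retry after Xms: waiting for machine to come up".
func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
	var err error
	delay := base
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay / 2)))
		fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return err
}

func main() {
	err := retryWithBackoff(func() error {
		return errors.New("unable to find current IP address") // stand-in condition
	}, 5, 250*time.Millisecond)
	fmt.Println("gave up:", err)
}
```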
	I1101 00:35:35.232397   59728 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1101 00:35:35.232422   59728 kubeadm.go:1128] stopping kube-system containers ...
	I1101 00:35:35.232500   59728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1101 00:35:35.256313   59728 docker.go:470] Stopping containers: [481ccdd6cf9b 7f9e3a80cf43 1681355536d2 3728eb5308a4 ee0792901fa8 39d41f6a87ff 5a44ee4d63c5 7db6b38a93c6 dd5ef506a5c9 8e381d619a02 444c0ced130a a28712848ac8 1172eb49ab03 649abe186bed 0b28588e65e3 4cd70b650b68]
	I1101 00:35:35.256408   59728 ssh_runner.go:195] Run: docker stop 481ccdd6cf9b 7f9e3a80cf43 1681355536d2 3728eb5308a4 ee0792901fa8 39d41f6a87ff 5a44ee4d63c5 7db6b38a93c6 dd5ef506a5c9 8e381d619a02 444c0ced130a a28712848ac8 1172eb49ab03 649abe186bed 0b28588e65e3 4cd70b650b68
	I1101 00:35:35.279814   59728 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 00:35:35.295647   59728 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 00:35:35.304894   59728 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 00:35:35.304970   59728 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 00:35:35.313868   59728 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 00:35:35.313897   59728 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:35:35.440210   59728 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:35:36.708046   59728 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.267795259s)
	I1101 00:35:36.708080   59728 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:35:36.937584   59728 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:35:37.021820   59728 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:35:37.154130   59728 api_server.go:52] waiting for apiserver process to appear ...
	I1101 00:35:37.154212   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:35:37.170161   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:35:37.683503   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:35:38.182909   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:35:38.683655   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:35:39.183847   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:35:39.209379   59728 api_server.go:72] duration metric: took 2.055248147s to wait for apiserver process to appear ...
	I1101 00:35:39.209404   59728 api_server.go:88] waiting for apiserver healthz status ...
	I1101 00:35:39.209423   59728 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1101 00:35:39.256059   59907 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1101 00:35:39.256155   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetIP
	I1101 00:35:39.259127   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:39.259593   59907 main.go:141] libmachine: (no-preload-658664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:37:ac", ip: ""} in network mk-no-preload-658664: {Iface:virbr3 ExpiryTime:2023-11-01 01:32:48 +0000 UTC Type:0 Mac:52:54:00:9b:37:ac Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:no-preload-658664 Clientid:01:52:54:00:9b:37:ac}
	I1101 00:35:39.259632   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined IP address 192.168.50.197 and MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:39.259921   59907 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1101 00:35:39.265109   59907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 00:35:39.279993   59907 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1101 00:35:39.280043   59907 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 00:35:39.306106   59907 docker.go:699] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1101 00:35:39.306132   59907 cache_images.go:84] Images are preloaded, skipping loading
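"Images are preloaded, skipping loading" is decided by listing what the runtime already holds (`docker images --format {{.Repository}}:{{.Tag}}`) and checking the required set against it. A minimal sketch of that comparison, with the required list copied from the stdout dump above; `imagesPreloaded` is a hypothetical name:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagesPreloaded reports whether every required image already appears in
// `docker images` output, the check behind "Images are preloaded".
func imagesPreloaded(required []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	for _, img := range required {
		if !have[img] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := imagesPreloaded([]string{
		"registry.k8s.io/kube-apiserver:v1.28.3",
		"registry.k8s.io/etcd:3.5.9-0",
		"registry.k8s.io/pause:3.9",
	})
	fmt.Println(ok, err)
}
```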
	I1101 00:35:39.306199   59907 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1101 00:35:39.338115   59907 cni.go:84] Creating CNI manager for ""
	I1101 00:35:39.338143   59907 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1101 00:35:39.338162   59907 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 00:35:39.338194   59907 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.197 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-658664 NodeName:no-preload-658664 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.197"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.197 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 00:35:39.338355   59907 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.197
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-658664"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.197
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.197"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 00:35:39.338464   59907 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=no-preload-658664 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.197
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:no-preload-658664 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
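The InitConfiguration/ClusterConfiguration/KubeletConfiguration documents above are rendered from the option struct printed at kubeadm.go:176. A toy sketch of that render step using text/template, with a drastically reduced field set; the struct and template here are illustrative only, not minikube's actual template:

```go
package main

import (
	"os"
	"text/template"
)

// Options is a drastically trimmed stand-in for minikube's kubeadm
// option struct; only the fields used by the toy template below.
type Options struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values taken from the log above.
	t.Execute(os.Stdout, Options{
		AdvertiseAddress: "192.168.50.197",
		BindPort:         8443,
		NodeName:         "no-preload-658664",
		PodSubnet:        "10.244.0.0/16",
	})
}
```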
	I1101 00:35:39.338545   59907 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 00:35:39.350903   59907 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 00:35:39.350990   59907 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 00:35:39.362851   59907 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1101 00:35:39.383285   59907 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 00:35:39.401524   59907 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I1101 00:35:39.423477   59907 ssh_runner.go:195] Run: grep 192.168.50.197	control-plane.minikube.internal$ /etc/hosts
	I1101 00:35:39.427552   59907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.197	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 00:35:39.441754   59907 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/no-preload-658664 for IP: 192.168.50.197
	I1101 00:35:39.441798   59907 certs.go:190] acquiring lock for shared ca certs: {Name:mkd78a553474b872bb63abf547b6fa0a317dc3be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:35:39.441955   59907 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.key
	I1101 00:35:39.441993   59907 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.key
	I1101 00:35:39.442080   59907 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/no-preload-658664/client.key
	I1101 00:35:39.442136   59907 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/no-preload-658664/apiserver.key.d719e12a
	I1101 00:35:39.442171   59907 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/no-preload-658664/proxy-client.key
	I1101 00:35:39.442278   59907 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463.pem (1338 bytes)
	W1101 00:35:39.442308   59907 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463_empty.pem, impossibly tiny 0 bytes
	I1101 00:35:39.442315   59907 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 00:35:39.442343   59907 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem (1082 bytes)
	I1101 00:35:39.442368   59907 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem (1123 bytes)
	I1101 00:35:39.442392   59907 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem (1675 bytes)
	I1101 00:35:39.442431   59907 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem (1708 bytes)
	I1101 00:35:39.443108   59907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/no-preload-658664/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 00:35:39.471807   59907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/no-preload-658664/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 00:35:39.499390   59907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/no-preload-658664/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 00:35:39.529945   59907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/no-preload-658664/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 00:35:39.560154   59907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 00:35:39.589522   59907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 00:35:39.618605   59907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 00:35:39.646669   59907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 00:35:39.672063   59907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 00:35:39.695895   59907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463.pem --> /usr/share/ca-certificates/14463.pem (1338 bytes)
	I1101 00:35:39.722089   59907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem --> /usr/share/ca-certificates/144632.pem (1708 bytes)
	I1101 00:35:39.748081   59907 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 00:35:39.765999   59907 ssh_runner.go:195] Run: openssl version
	I1101 00:35:39.773045   59907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 00:35:39.783854   59907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:35:39.789593   59907 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:35:39.789692   59907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:35:39.796666   59907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 00:35:39.808450   59907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14463.pem && ln -fs /usr/share/ca-certificates/14463.pem /etc/ssl/certs/14463.pem"
	I1101 00:35:39.818992   59907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14463.pem
	I1101 00:35:39.824984   59907 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:48 /usr/share/ca-certificates/14463.pem
	I1101 00:35:39.825127   59907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14463.pem
	I1101 00:35:39.832165   59907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14463.pem /etc/ssl/certs/51391683.0"
	I1101 00:35:39.845919   59907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144632.pem && ln -fs /usr/share/ca-certificates/144632.pem /etc/ssl/certs/144632.pem"
	I1101 00:35:39.859095   59907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144632.pem
	I1101 00:35:39.864478   59907 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:48 /usr/share/ca-certificates/144632.pem
	I1101 00:35:39.864556   59907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144632.pem
	I1101 00:35:39.870652   59907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144632.pem /etc/ssl/certs/3ec20f2e.0"
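The `openssl x509 -hash -noout` / `ln -fs ... /etc/ssl/certs/<hash>.0` pairs above install each CA into the system trust directory under its subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0). A sketch of that step shelling out to openssl; `installCA` is a hypothetical name, and it assumes openssl is on PATH and the process can write /etc/ssl/certs:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links certPath into /etc/ssl/certs under its OpenSSL
// subject-hash name, e.g. b5213941.0, as the log's ln -fs steps do.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
```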
	I1101 00:35:39.883349   59907 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 00:35:39.888380   59907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 00:35:39.895562   59907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 00:35:39.901537   59907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 00:35:39.907444   59907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 00:35:39.913424   59907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 00:35:39.921176   59907 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 00:35:39.928230   59907 kubeadm.go:404] StartCluster: {Name:no-preload-658664 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-658664 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.197 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:35:39.928397   59907 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1101 00:35:39.948302   59907 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 00:35:39.958394   59907 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 00:35:39.958422   59907 kubeadm.go:636] restartCluster start
	I1101 00:35:39.958492   59907 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 00:35:39.967551   59907 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:39.968208   59907 kubeconfig.go:135] verify returned: extract IP: "no-preload-658664" does not appear in /home/jenkins/minikube-integration/17486-7251/kubeconfig
	I1101 00:35:39.968456   59907 kubeconfig.go:146] "no-preload-658664" context is missing from /home/jenkins/minikube-integration/17486-7251/kubeconfig - will repair!
	I1101 00:35:39.968932   59907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7251/kubeconfig: {Name:mk525de6243b20b40961c1a878f4272a26e9a097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:35:39.970218   59907 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 00:35:39.979302   59907 api_server.go:166] Checking apiserver status ...
	I1101 00:35:39.979361   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:39.990422   59907 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:39.990442   59907 api_server.go:166] Checking apiserver status ...
	I1101 00:35:39.990493   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:40.005612   59907 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:40.506693   59907 api_server.go:166] Checking apiserver status ...
	I1101 00:35:40.506765   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:40.519901   59907 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:41.006586   59907 api_server.go:166] Checking apiserver status ...
	I1101 00:35:41.006678   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:41.020080   59907 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:41.506699   59907 api_server.go:166] Checking apiserver status ...
	I1101 00:35:41.506793   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:41.520054   59907 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:42.006643   59907 api_server.go:166] Checking apiserver status ...
	I1101 00:35:42.006748   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:42.024050   59907 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:42.506662   59907 api_server.go:166] Checking apiserver status ...
	I1101 00:35:42.506766   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:42.522807   59907 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:43.006274   59907 api_server.go:166] Checking apiserver status ...
	I1101 00:35:43.006355   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:43.019283   59907 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:39.940879   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:39.941412   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | unable to find current IP address of domain default-k8s-diff-port-195256 in network mk-default-k8s-diff-port-195256
	I1101 00:35:39.941562   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | I1101 00:35:39.941494   60335 retry.go:31] will retry after 785.565574ms: waiting for machine to come up
	I1101 00:35:40.728494   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:40.729018   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | unable to find current IP address of domain default-k8s-diff-port-195256 in network mk-default-k8s-diff-port-195256
	I1101 00:35:40.729065   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | I1101 00:35:40.728986   60335 retry.go:31] will retry after 1.298415309s: waiting for machine to come up
	I1101 00:35:42.028456   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:42.029005   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | unable to find current IP address of domain default-k8s-diff-port-195256 in network mk-default-k8s-diff-port-195256
	I1101 00:35:42.029032   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | I1101 00:35:42.028962   60335 retry.go:31] will retry after 1.35969985s: waiting for machine to come up
	I1101 00:35:43.390380   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:43.390838   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | unable to find current IP address of domain default-k8s-diff-port-195256 in network mk-default-k8s-diff-port-195256
	I1101 00:35:43.390869   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | I1101 00:35:43.390779   60335 retry.go:31] will retry after 1.70288549s: waiting for machine to come up
	I1101 00:35:44.210619   59728 api_server.go:269] stopped: https://192.168.39.70:8443/healthz: Get "https://192.168.39.70:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 00:35:44.210666   59728 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1101 00:35:44.520286   59728 api_server.go:279] https://192.168.39.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 00:35:44.520326   59728 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 00:35:45.020762   59728 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1101 00:35:45.028363   59728 api_server.go:279] https://192.168.39.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1101 00:35:45.028419   59728 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1101 00:35:43.506098   59907 api_server.go:166] Checking apiserver status ...
	I1101 00:35:43.506197   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:43.518536   59907 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:44.005929   59907 api_server.go:166] Checking apiserver status ...
	I1101 00:35:44.006030   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:44.018156   59907 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:44.505752   59907 api_server.go:166] Checking apiserver status ...
	I1101 00:35:44.505841   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:44.523182   59907 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:45.006690   59907 api_server.go:166] Checking apiserver status ...
	I1101 00:35:45.006810   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:45.019522   59907 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:45.506655   59907 api_server.go:166] Checking apiserver status ...
	I1101 00:35:45.506723   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:45.524399   59907 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:46.005999   59907 api_server.go:166] Checking apiserver status ...
	I1101 00:35:46.006109   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:46.018399   59907 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:46.505908   59907 api_server.go:166] Checking apiserver status ...
	I1101 00:35:46.505999   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:46.522934   59907 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:47.006574   59907 api_server.go:166] Checking apiserver status ...
	I1101 00:35:47.006657   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:47.022899   59907 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:47.506496   59907 api_server.go:166] Checking apiserver status ...
	I1101 00:35:47.506636   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:47.522571   59907 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:48.005787   59907 api_server.go:166] Checking apiserver status ...
	I1101 00:35:48.005857   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:48.018511   59907 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:45.521386   59728 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1101 00:35:45.531868   59728 api_server.go:279] https://192.168.39.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1101 00:35:45.531904   59728 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1101 00:35:46.020473   59728 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1101 00:35:46.027420   59728 api_server.go:279] https://192.168.39.70:8443/healthz returned 200:
	ok
	I1101 00:35:46.036146   59728 api_server.go:141] control plane version: v1.16.0
	I1101 00:35:46.036180   59728 api_server.go:131] duration metric: took 6.826768839s to wait for apiserver health ...
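The 403 → 500 → 200 progression above is a bare HTTPS GET against /healthz, retried until the body is "ok". A sketch of one probe, skipping certificate verification as an anonymous client must against the apiserver's self-signed chain; the URL is the one in the log:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz does one anonymous GET against the apiserver healthz
// endpoint; 403/500 responses like those in the log are retryable.
func probeHealthz(url string) (int, string, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is not in the host trust store.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return 0, "", err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode, string(body), nil
}

func main() {
	code, body, err := probeHealthz("https://192.168.39.70:8443/healthz")
	fmt.Println(code, body, err)
}
```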
	I1101 00:35:46.036192   59728 cni.go:84] Creating CNI manager for ""
	I1101 00:35:46.036208   59728 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1101 00:35:46.036217   59728 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 00:35:46.044931   59728 system_pods.go:59] 8 kube-system pods found
	I1101 00:35:46.044968   59728 system_pods.go:61] "coredns-5644d7b6d9-67f7c" [2d312387-7c72-428b-807c-3a200439f116] Running
	I1101 00:35:46.044976   59728 system_pods.go:61] "coredns-5644d7b6d9-kj7pf" [329b83a0-af26-47d9-b44a-d2af4cb4abab] Running
	I1101 00:35:46.044983   59728 system_pods.go:61] "etcd-old-k8s-version-993392" [7eefc8f6-b708-4d05-849a-8d15a4cabb86] Running
	I1101 00:35:46.044998   59728 system_pods.go:61] "kube-apiserver-old-k8s-version-993392" [e646f5fb-7a3e-4db5-b5f8-d255bc946d12] Running
	I1101 00:35:46.045007   59728 system_pods.go:61] "kube-controller-manager-old-k8s-version-993392" [663a0c13-d3ae-46aa-85a7-b1cca0995a50] Pending
	I1101 00:35:46.045018   59728 system_pods.go:61] "kube-proxy-6qzxd" [938e4a3a-f590-426f-9856-62d7307d3d75] Running
	I1101 00:35:46.045024   59728 system_pods.go:61] "kube-scheduler-old-k8s-version-993392" [315d1110-59c8-4133-abab-65aba4e1304c] Running
	I1101 00:35:46.045034   59728 system_pods.go:61] "storage-provisioner" [da85e132-ee90-421b-8e89-8804f7bb59ca] Running
	I1101 00:35:46.045046   59728 system_pods.go:74] duration metric: took 8.821705ms to wait for pod list to return data ...
	I1101 00:35:46.045059   59728 node_conditions.go:102] verifying NodePressure condition ...
	I1101 00:35:46.048738   59728 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:35:46.048767   59728 node_conditions.go:123] node cpu capacity is 2
	I1101 00:35:46.048777   59728 node_conditions.go:105] duration metric: took 3.710195ms to run NodePressure ...
	I1101 00:35:46.048795   59728 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:35:46.475898   59728 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1101 00:35:46.481001   59728 kubeadm.go:787] kubelet initialised
	I1101 00:35:46.481029   59728 kubeadm.go:788] duration metric: took 5.103265ms waiting for restarted kubelet to initialise ...
	I1101 00:35:46.481039   59728 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:35:46.493275   59728 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-67f7c" in "kube-system" namespace to be "Ready" ...
	I1101 00:35:46.509716   59728 pod_ready.go:92] pod "coredns-5644d7b6d9-67f7c" in "kube-system" namespace has status "Ready":"True"
	I1101 00:35:46.509743   59728 pod_ready.go:81] duration metric: took 16.434881ms waiting for pod "coredns-5644d7b6d9-67f7c" in "kube-system" namespace to be "Ready" ...
	I1101 00:35:46.509756   59728 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-kj7pf" in "kube-system" namespace to be "Ready" ...
	I1101 00:35:46.525114   59728 pod_ready.go:92] pod "coredns-5644d7b6d9-kj7pf" in "kube-system" namespace has status "Ready":"True"
	I1101 00:35:46.525137   59728 pod_ready.go:81] duration metric: took 15.373522ms waiting for pod "coredns-5644d7b6d9-kj7pf" in "kube-system" namespace to be "Ready" ...
	I1101 00:35:46.525152   59728 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-993392" in "kube-system" namespace to be "Ready" ...
	I1101 00:35:46.532743   59728 pod_ready.go:92] pod "etcd-old-k8s-version-993392" in "kube-system" namespace has status "Ready":"True"
	I1101 00:35:46.532763   59728 pod_ready.go:81] duration metric: took 7.602958ms waiting for pod "etcd-old-k8s-version-993392" in "kube-system" namespace to be "Ready" ...
	I1101 00:35:46.532774   59728 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-993392" in "kube-system" namespace to be "Ready" ...
	I1101 00:35:46.550114   59728 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-993392" in "kube-system" namespace has status "Ready":"True"
	I1101 00:35:46.550158   59728 pod_ready.go:81] duration metric: took 17.375997ms waiting for pod "kube-apiserver-old-k8s-version-993392" in "kube-system" namespace to be "Ready" ...
	I1101 00:35:46.550173   59728 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-993392" in "kube-system" namespace to be "Ready" ...
	I1101 00:35:47.793913   59728 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-993392" in "kube-system" namespace has status "Ready":"True"
	I1101 00:35:47.793945   59728 pod_ready.go:81] duration metric: took 1.243762863s waiting for pod "kube-controller-manager-old-k8s-version-993392" in "kube-system" namespace to be "Ready" ...
	I1101 00:35:47.793959   59728 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6qzxd" in "kube-system" namespace to be "Ready" ...
	I1101 00:35:48.081483   59728 pod_ready.go:92] pod "kube-proxy-6qzxd" in "kube-system" namespace has status "Ready":"True"
	I1101 00:35:48.081513   59728 pod_ready.go:81] duration metric: took 287.546454ms waiting for pod "kube-proxy-6qzxd" in "kube-system" namespace to be "Ready" ...
	I1101 00:35:48.081533   59728 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-993392" in "kube-system" namespace to be "Ready" ...
	I1101 00:35:48.479918   59728 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-993392" in "kube-system" namespace has status "Ready":"True"
	I1101 00:35:48.479942   59728 pod_ready.go:81] duration metric: took 398.399812ms waiting for pod "kube-scheduler-old-k8s-version-993392" in "kube-system" namespace to be "Ready" ...
	I1101 00:35:48.479955   59728 pod_ready.go:38] duration metric: took 1.998905143s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
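The pod_ready loop above waits for each system-critical pod's Ready condition. A minimal client-go sketch of the same check for one pod, assuming a reachable kubeconfig at a placeholder path; the pod name is taken from the log and the loop is simplified (no overall deadline):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True,
// the same test behind `pod ... has status "Ready":"True"` above.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-6qzxd", metav1.GetOptions{})
		if err == nil && podReady(p) {
			fmt.Println("ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```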
	I1101 00:35:48.479975   59728 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 00:35:48.491169   59728 ops.go:34] apiserver oom_adj: -16
	I1101 00:35:48.491195   59728 kubeadm.go:640] restartCluster took 23.277853394s
	I1101 00:35:48.491207   59728 kubeadm.go:406] StartCluster complete in 23.305459978s
	I1101 00:35:48.491229   59728 settings.go:142] acquiring lock: {Name:mk57c659cffa0c6a1b184e5906c662f85ff8a099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:35:48.491350   59728 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7251/kubeconfig
	I1101 00:35:48.492247   59728 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7251/kubeconfig: {Name:mk525de6243b20b40961c1a878f4272a26e9a097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:35:48.492462   59728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 00:35:48.492610   59728 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 00:35:48.492675   59728 config.go:182] Loaded profile config "old-k8s-version-993392": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1101 00:35:48.492705   59728 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-993392"
	I1101 00:35:48.492718   59728 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-993392"
	I1101 00:35:48.492730   59728 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-993392"
	I1101 00:35:48.492738   59728 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-993392"
	I1101 00:35:48.492749   59728 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-993392"
	I1101 00:35:48.492741   59728 cache.go:107] acquiring lock: {Name:mkc5ed527821f669fe42d90dc96f9db56fa3565a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:35:48.492761   59728 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-993392"
	W1101 00:35:48.492770   59728 addons.go:240] addon metrics-server should already be in state true
	I1101 00:35:48.492802   59728 cache.go:115] /home/jenkins/minikube-integration/17486-7251/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I1101 00:35:48.492810   59728 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17486-7251/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 75.161µs
	I1101 00:35:48.492817   59728 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17486-7251/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I1101 00:35:48.492817   59728 host.go:66] Checking if "old-k8s-version-993392" exists ...
	I1101 00:35:48.492823   59728 cache.go:87] Successfully saved all images to host disk.
	I1101 00:35:48.492968   59728 config.go:182] Loaded profile config "old-k8s-version-993392": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1101 00:35:48.493211   59728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:35:48.493227   59728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:35:48.493232   59728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:35:48.493243   59728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:35:48.493251   59728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:35:48.493268   59728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:35:48.493308   59728 addons.go:69] Setting dashboard=true in profile "old-k8s-version-993392"
	W1101 00:35:48.492739   59728 addons.go:240] addon storage-provisioner should already be in state true
	I1101 00:35:48.493320   59728 addons.go:231] Setting addon dashboard=true in "old-k8s-version-993392"
	W1101 00:35:48.493326   59728 addons.go:240] addon dashboard should already be in state true
	I1101 00:35:48.493355   59728 host.go:66] Checking if "old-k8s-version-993392" exists ...
	I1101 00:35:48.493355   59728 host.go:66] Checking if "old-k8s-version-993392" exists ...
	I1101 00:35:48.493678   59728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:35:48.493712   59728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:35:48.493719   59728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:35:48.493740   59728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:35:48.510997   59728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33143
	I1101 00:35:48.511013   59728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35227
	I1101 00:35:48.511657   59728 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:35:48.512145   59728 main.go:141] libmachine: Using API Version  1
	I1101 00:35:48.512177   59728 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:35:48.512445   59728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38295
	I1101 00:35:48.512668   59728 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:35:48.512526   59728 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:35:48.513011   59728 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:35:48.513202   59728 main.go:141] libmachine: Using API Version  1
	I1101 00:35:48.513219   59728 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:35:48.513353   59728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:35:48.513419   59728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:35:48.513543   59728 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:35:48.513738   59728 main.go:141] libmachine: Using API Version  1
	I1101 00:35:48.513754   59728 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:35:48.513821   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetState
	I1101 00:35:48.514063   59728 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:35:48.514242   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetState
	I1101 00:35:48.516028   59728 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-993392" context rescaled to 1 replicas
	I1101 00:35:48.516064   59728 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1101 00:35:48.519293   59728 out.go:177] * Verifying Kubernetes components...
	I1101 00:35:48.516642   59728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:35:48.517421   59728 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-993392"
	W1101 00:35:48.520817   59728 addons.go:240] addon default-storageclass should already be in state true
	I1101 00:35:48.520849   59728 host.go:66] Checking if "old-k8s-version-993392" exists ...
	I1101 00:35:48.520970   59728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:35:48.521140   59728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:35:48.521328   59728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:35:48.521367   59728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:35:48.535589   59728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38453
	I1101 00:35:48.536011   59728 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:35:48.536696   59728 main.go:141] libmachine: Using API Version  1
	I1101 00:35:48.536724   59728 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:35:48.537098   59728 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:35:48.537279   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetState
	I1101 00:35:48.539192   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .DriverName
	I1101 00:35:48.541112   59728 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 00:35:48.542869   59728 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 00:35:48.542892   59728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 00:35:48.542914   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHHostname
	I1101 00:35:48.546699   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:48.547360   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:ea:1c", ip: ""} in network mk-old-k8s-version-993392: {Iface:virbr1 ExpiryTime:2023-11-01 01:32:19 +0000 UTC Type:0 Mac:52:54:00:f4:ea:1c Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-993392 Clientid:01:52:54:00:f4:ea:1c}
	I1101 00:35:48.547386   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined IP address 192.168.39.70 and MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:48.547569   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHPort
	I1101 00:35:48.548567   59728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42739
	I1101 00:35:48.549035   59728 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:35:48.549640   59728 main.go:141] libmachine: Using API Version  1
	I1101 00:35:48.549665   59728 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:35:48.550038   59728 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:35:48.550659   59728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:35:48.550701   59728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:35:48.550940   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHKeyPath
	I1101 00:35:48.551132   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHUsername
	I1101 00:35:48.551307   59728 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/old-k8s-version-993392/id_rsa Username:docker}
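
Each sshutil.go:53 line above records a client built from the IP, port, key path, and username that the driver's GetSSHHostname/GetSSHPort/GetSSHKeyPath/GetSSHUsername calls returned. A minimal equivalent using golang.org/x/crypto/ssh — a sketch, not minikube's sshutil:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // newSSHClient dials host:port and authenticates with the given private
    // key, matching the tuple printed by sshutil.go:53.
    func newSSHClient(host string, port int, keyPath, user string) (*ssh.Client, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return nil, err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return nil, err
    	}
    	cfg := &ssh.ClientConfig{
    		User: user,
    		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		// Test VMs get fresh host keys on every create, so verification
    		// is skipped here; a production client should pin the host key.
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    	}
    	return ssh.Dial("tcp", fmt.Sprintf("%s:%d", host, port), cfg)
    }
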
	I1101 00:35:48.562654   59728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46821
	I1101 00:35:48.562688   59728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37845
	I1101 00:35:48.562661   59728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43127
	I1101 00:35:48.563183   59728 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:35:48.563797   59728 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:35:48.563930   59728 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:35:48.563982   59728 main.go:141] libmachine: Using API Version  1
	I1101 00:35:48.563997   59728 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:35:48.564461   59728 main.go:141] libmachine: Using API Version  1
	I1101 00:35:48.564479   59728 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:35:48.564540   59728 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:35:48.565144   59728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:35:48.565166   59728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:35:48.565410   59728 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:35:48.565931   59728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:35:48.565957   59728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:35:48.571158   59728 main.go:141] libmachine: Using API Version  1
	I1101 00:35:48.571205   59728 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:35:48.571652   59728 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:35:48.571822   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .DriverName
	I1101 00:35:48.571992   59728 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 00:35:48.572019   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHHostname
	I1101 00:35:48.576163   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:48.576197   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:ea:1c", ip: ""} in network mk-old-k8s-version-993392: {Iface:virbr1 ExpiryTime:2023-11-01 01:32:19 +0000 UTC Type:0 Mac:52:54:00:f4:ea:1c Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-993392 Clientid:01:52:54:00:f4:ea:1c}
	I1101 00:35:48.576222   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined IP address 192.168.39.70 and MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:48.576294   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHPort
	I1101 00:35:48.576443   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHKeyPath
	I1101 00:35:48.576552   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHUsername
	I1101 00:35:48.576760   59728 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/old-k8s-version-993392/id_rsa Username:docker}
	I1101 00:35:48.579461   59728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33377
	I1101 00:35:48.579975   59728 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:35:48.580466   59728 main.go:141] libmachine: Using API Version  1
	I1101 00:35:48.580494   59728 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:35:48.580840   59728 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:35:48.581020   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetState
	I1101 00:35:48.582602   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .DriverName
	I1101 00:35:48.584706   59728 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 00:35:48.586611   59728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40901
	I1101 00:35:48.587768   59728 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 00:35:48.587784   59728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 00:35:48.587804   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHHostname
	I1101 00:35:48.586635   59728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44481
	I1101 00:35:48.588756   59728 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:35:48.588826   59728 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:35:48.589352   59728 main.go:141] libmachine: Using API Version  1
	I1101 00:35:48.589370   59728 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:35:48.589516   59728 main.go:141] libmachine: Using API Version  1
	I1101 00:35:48.589540   59728 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:35:48.589755   59728 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:35:48.589978   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetState
	I1101 00:35:48.590952   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:48.591470   59728 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:35:48.591515   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:ea:1c", ip: ""} in network mk-old-k8s-version-993392: {Iface:virbr1 ExpiryTime:2023-11-01 01:32:19 +0000 UTC Type:0 Mac:52:54:00:f4:ea:1c Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-993392 Clientid:01:52:54:00:f4:ea:1c}
	I1101 00:35:48.591606   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined IP address 192.168.39.70 and MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:48.591756   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHPort
	I1101 00:35:48.591831   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetState
	I1101 00:35:48.591993   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHKeyPath
	I1101 00:35:48.592024   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .DriverName
	I1101 00:35:48.592128   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHUsername
	I1101 00:35:48.592236   59728 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 00:35:48.592252   59728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 00:35:48.592268   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHHostname
	I1101 00:35:48.592388   59728 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/old-k8s-version-993392/id_rsa Username:docker}
	I1101 00:35:48.593607   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .DriverName
	I1101 00:35:48.595564   59728 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1101 00:35:48.595978   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:48.596772   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHPort
	I1101 00:35:48.597171   59728 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 00:35:45.095796   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:45.096434   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | unable to find current IP address of domain default-k8s-diff-port-195256 in network mk-default-k8s-diff-port-195256
	I1101 00:35:45.096467   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | I1101 00:35:45.096365   60335 retry.go:31] will retry after 2.401497699s: waiting for machine to come up
	I1101 00:35:47.499802   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:47.500423   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | unable to find current IP address of domain default-k8s-diff-port-195256 in network mk-default-k8s-diff-port-195256
	I1101 00:35:47.500465   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | I1101 00:35:47.500336   60335 retry.go:31] will retry after 2.320302687s: waiting for machine to come up
	I1101 00:35:48.598826   59728 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 00:35:48.598845   59728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 00:35:48.597208   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:ea:1c", ip: ""} in network mk-old-k8s-version-993392: {Iface:virbr1 ExpiryTime:2023-11-01 01:32:19 +0000 UTC Type:0 Mac:52:54:00:f4:ea:1c Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-993392 Clientid:01:52:54:00:f4:ea:1c}
	I1101 00:35:48.598867   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHHostname
	I1101 00:35:48.598884   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined IP address 192.168.39.70 and MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:48.597339   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHKeyPath
	I1101 00:35:48.599094   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHUsername
	I1101 00:35:48.599262   59728 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/old-k8s-version-993392/id_rsa Username:docker}
	I1101 00:35:48.605496   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:48.605886   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:ea:1c", ip: ""} in network mk-old-k8s-version-993392: {Iface:virbr1 ExpiryTime:2023-11-01 01:32:19 +0000 UTC Type:0 Mac:52:54:00:f4:ea:1c Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-993392 Clientid:01:52:54:00:f4:ea:1c}
	I1101 00:35:48.605906   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | domain old-k8s-version-993392 has defined IP address 192.168.39.70 and MAC address 52:54:00:f4:ea:1c in network mk-old-k8s-version-993392
	I1101 00:35:48.606342   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHPort
	I1101 00:35:48.606544   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHKeyPath
	I1101 00:35:48.606711   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .GetSSHUsername
	I1101 00:35:48.606844   59728 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/old-k8s-version-993392/id_rsa Username:docker}
	I1101 00:35:48.757121   59728 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 00:35:48.757143   59728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 00:35:48.825835   59728 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 00:35:48.825861   59728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 00:35:48.841045   59728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 00:35:48.857397   59728 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 00:35:48.857426   59728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 00:35:48.862807   59728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 00:35:48.949877   59728 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 00:35:48.949907   59728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 00:35:48.954632   59728 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 00:35:48.954661   59728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 00:35:49.003582   59728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
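
Each addons.go:423 / ssh_runner.go:362 pair above streams an in-memory manifest ("scp memory") to /etc/kubernetes/addons on the VM, and the ssh_runner.go:195 line then applies the whole set with the version-pinned kubectl under /var/lib/minikube/binaries. A sketch of that copy-then-apply step over an established *ssh.Client — hypothetical helpers; the real ssh_runner additionally verifies sizes and permissions:

    package main

    import (
    	"bytes"
    	"strings"

    	"golang.org/x/crypto/ssh"
    )

    // scpMemory streams manifest bytes to a root-owned path on the VM.
    func scpMemory(client *ssh.Client, manifest []byte, dst string) error {
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(manifest)
    	return sess.Run("sudo tee " + dst + " >/dev/null")
    }

    // applyManifests runs the pinned kubectl against the in-VM kubeconfig,
    // passing every manifest in a single apply as the log does.
    func applyManifests(client *ssh.Client, k8sVersion string, files []string) error {
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
    		"/var/lib/minikube/binaries/" + k8sVersion + "/kubectl apply -f " +
    		strings.Join(files, " -f ")
    	return sess.Run(cmd)
    }
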
	I1101 00:35:49.059215   59728 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 00:35:49.059241   59728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 00:35:49.092322   59728 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1101 00:35:49.092345   59728 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-993392" to be "Ready" ...
	I1101 00:35:49.092481   59728 docker.go:699] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	k8s.gcr.io/pause:3.1
	registry.k8s.io/pause:3.1
	
	-- /stdout --
	I1101 00:35:49.092503   59728 cache_images.go:84] Images are preloaded, skipping loading
	I1101 00:35:49.092512   59728 cache_images.go:262] succeeded pushing to: old-k8s-version-993392
	I1101 00:35:49.092519   59728 cache_images.go:263] failed pushing to: 
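
The preloaded-images decision above comes from running `docker images --format {{.Repository}}:{{.Tag}}` on the VM (the ssh_runner line at 00:35:48.571992) and checking the output against the image list for this Kubernetes version; since everything was present, loading was skipped. A sketch of that comparison, taking the command output as input:

    package main

    import "strings"

    // imagesPreloaded parses `docker images --format {{.Repository}}:{{.Tag}}`
    // output and reports whether every required image is already on the VM.
    func imagesPreloaded(dockerImagesOut string, required []string) bool {
    	have := map[string]bool{}
    	for _, line := range strings.Split(strings.TrimSpace(dockerImagesOut), "\n") {
    		have[strings.TrimSpace(line)] = true
    	}
    	for _, img := range required {
    		if !have[img] {
    			return false
    		}
    	}
    	return true
    }
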
	I1101 00:35:49.092542   59728 main.go:141] libmachine: Making call to close driver server
	I1101 00:35:49.092563   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .Close
	I1101 00:35:49.092842   59728 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:35:49.092902   59728 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:35:49.092918   59728 main.go:141] libmachine: Making call to close driver server
	I1101 00:35:49.092928   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .Close
	I1101 00:35:49.092886   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | Closing plugin on server side
	I1101 00:35:49.093144   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | Closing plugin on server side
	I1101 00:35:49.093219   59728 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:35:49.093237   59728 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:35:49.095807   59728 node_ready.go:49] node "old-k8s-version-993392" has status "Ready":"True"
	I1101 00:35:49.095827   59728 node_ready.go:38] duration metric: took 3.458959ms waiting for node "old-k8s-version-993392" to be "Ready" ...
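
node_ready.go polls the node object until its Ready condition reports True, under the 6m budget set at start.go:223; here it returned in 3.46ms because the node was already Ready. A sketch of such a wait with client-go — the 500ms poll interval is an assumption, not minikube's exact value:

    package main

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady blocks until the node's Ready condition is True or the
    // timeout elapses.
    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
    	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, nil // transient API errors: keep polling
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady {
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    }
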
	I1101 00:35:49.095839   59728 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:35:49.105767   59728 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-67f7c" in "kube-system" namespace to be "Ready" ...
	I1101 00:35:49.127752   59728 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 00:35:49.127778   59728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 00:35:49.204697   59728 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 00:35:49.204719   59728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 00:35:49.271677   59728 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 00:35:49.271700   59728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 00:35:49.317200   59728 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 00:35:49.317222   59728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 00:35:49.394641   59728 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 00:35:49.394666   59728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 00:35:49.488336   59728 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 00:35:49.488364   59728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 00:35:49.507511   59728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 00:35:49.604555   59728 main.go:141] libmachine: Making call to close driver server
	I1101 00:35:49.604584   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .Close
	I1101 00:35:49.604585   59728 main.go:141] libmachine: Making call to close driver server
	I1101 00:35:49.604601   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .Close
	I1101 00:35:49.604844   59728 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:35:49.604896   59728 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:35:49.604911   59728 main.go:141] libmachine: Making call to close driver server
	I1101 00:35:49.604920   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .Close
	I1101 00:35:49.605027   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | Closing plugin on server side
	I1101 00:35:49.605046   59728 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:35:49.605070   59728 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:35:49.605081   59728 main.go:141] libmachine: Making call to close driver server
	I1101 00:35:49.605095   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .Close
	I1101 00:35:49.605150   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | Closing plugin on server side
	I1101 00:35:49.605223   59728 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:35:49.605237   59728 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:35:49.605320   59728 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:35:49.605340   59728 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:35:49.605345   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | Closing plugin on server side
	I1101 00:35:49.615278   59728 main.go:141] libmachine: Making call to close driver server
	I1101 00:35:49.615299   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .Close
	I1101 00:35:49.615598   59728 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:35:49.615616   59728 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:35:49.716039   59728 main.go:141] libmachine: Making call to close driver server
	I1101 00:35:49.716069   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .Close
	I1101 00:35:49.716337   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | Closing plugin on server side
	I1101 00:35:49.716352   59728 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:35:49.716368   59728 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:35:49.716385   59728 main.go:141] libmachine: Making call to close driver server
	I1101 00:35:49.716399   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .Close
	I1101 00:35:49.716653   59728 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:35:49.716670   59728 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:35:49.716686   59728 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-993392"
	I1101 00:35:49.986237   59728 main.go:141] libmachine: Making call to close driver server
	I1101 00:35:49.986263   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .Close
	I1101 00:35:49.986564   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | Closing plugin on server side
	I1101 00:35:49.986611   59728 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:35:49.986631   59728 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:35:49.986657   59728 main.go:141] libmachine: Making call to close driver server
	I1101 00:35:49.986668   59728 main.go:141] libmachine: (old-k8s-version-993392) Calling .Close
	I1101 00:35:49.986925   59728 main.go:141] libmachine: (old-k8s-version-993392) DBG | Closing plugin on server side
	I1101 00:35:49.986964   59728 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:35:49.986983   59728 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:35:49.989025   59728 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-993392 addons enable metrics-server	
	
	
	I1101 00:35:49.990729   59728 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1101 00:35:49.992478   59728 addons.go:502] enable addons completed in 1.499878452s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1101 00:35:48.506370   59907 api_server.go:166] Checking apiserver status ...
	I1101 00:35:48.506451   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:48.535977   59907 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:49.006310   59907 api_server.go:166] Checking apiserver status ...
	I1101 00:35:49.006380   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:49.022221   59907 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:49.505732   59907 api_server.go:166] Checking apiserver status ...
	I1101 00:35:49.505851   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:35:49.518186   59907 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:35:49.980057   59907 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1101 00:35:49.980129   59907 kubeadm.go:1128] stopping kube-system containers ...
	I1101 00:35:49.980193   59907 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1101 00:35:50.002804   59907 docker.go:470] Stopping containers: [567006dd918c 45553c53cfad a73caef6452d 4fd4558df44e 881035bb060d 70104620b5bd 73fc374842e5 21cc6bf16ff6 0fec034eb4d8 e4dca42a3888 d301b26535ee b2a91ebe5515 2cee25941486 41a749d5cd3d f466f9b88efe]
	I1101 00:35:50.002882   59907 ssh_runner.go:195] Run: docker stop 567006dd918c 45553c53cfad a73caef6452d 4fd4558df44e 881035bb060d 70104620b5bd 73fc374842e5 21cc6bf16ff6 0fec034eb4d8 e4dca42a3888 d301b26535ee b2a91ebe5515 2cee25941486 41a749d5cd3d f466f9b88efe
	I1101 00:35:50.023277   59907 ssh_runner.go:195] Run: sudo systemctl stop kubelet
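
Before reconfiguring, docker.go:470 collects every kube-system container ID via the k8s_.*_(kube-system)_ name filter, stops them in one invocation, and then stops the kubelet so it cannot restart them. A sketch of that sequence — runSSH is a hypothetical helper that executes a command on the VM and returns its stdout:

    package main

    import "strings"

    // stopKubeSystemContainers mirrors the filter-then-stop pattern in the log.
    func stopKubeSystemContainers(runSSH func(cmd string) (string, error)) error {
    	out, err := runSSH(`docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}`)
    	if err != nil {
    		return err
    	}
    	if ids := strings.Fields(out); len(ids) > 0 {
    		if _, err := runSSH("docker stop " + strings.Join(ids, " ")); err != nil {
    			return err
    		}
    	}
    	// Stop the kubelet last so it cannot recreate the static pods.
    	_, err = runSSH("sudo systemctl stop kubelet")
    	return err
    }
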
	I1101 00:35:50.038483   59907 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 00:35:50.047637   59907 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 00:35:50.047725   59907 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 00:35:50.056198   59907 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 00:35:50.056219   59907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:35:50.165877   59907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:35:50.939400   59907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:35:51.135864   59907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:35:51.243570   59907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
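
Because the config check found none of the expected kubeconfigs on disk, kubeadm.go rebuilds the control plane piecewise rather than running a full `kubeadm init`: certs, kubeconfig files, kubelet start, static control-plane manifests, then local etcd, all against the same /var/tmp/minikube/kubeadm.yaml. A sketch driving those phases in the order logged, with the same PATH prefix (runSSH is again a hypothetical run-on-VM helper):

    package main

    // reconfigureControlPlane re-runs the individual kubeadm init phases
    // seen in the log against the generated kubeadm.yaml.
    func reconfigureControlPlane(runSSH func(cmd string) (string, error), k8sVersion string) error {
    	phases := []string{
    		"certs all",
    		"kubeconfig all",
    		"kubelet-start",
    		"control-plane all",
    		"etcd local",
    	}
    	for _, phase := range phases {
    		cmd := `sudo env PATH="/var/lib/minikube/binaries/` + k8sVersion + `:$PATH" ` +
    			"kubeadm init phase " + phase + " --config /var/tmp/minikube/kubeadm.yaml"
    		if _, err := runSSH(cmd); err != nil {
    			return err
    		}
    	}
    	return nil
    }
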
	I1101 00:35:51.331397   59907 api_server.go:52] waiting for apiserver process to appear ...
	I1101 00:35:51.331462   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:35:51.345328   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:35:51.869858   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:35:52.369626   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:35:52.869546   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:35:53.369786   59907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:35:53.389222   59907 api_server.go:72] duration metric: took 2.057824638s to wait for apiserver process to appear ...
	I1101 00:35:53.389256   59907 api_server.go:88] waiting for apiserver healthz status ...
	I1101 00:35:53.389274   59907 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I1101 00:35:49.823550   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:49.824077   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | unable to find current IP address of domain default-k8s-diff-port-195256 in network mk-default-k8s-diff-port-195256
	I1101 00:35:49.824110   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | I1101 00:35:49.824013   60335 retry.go:31] will retry after 4.231470369s: waiting for machine to come up
	I1101 00:35:54.059607   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:54.060108   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Found IP for machine: 192.168.72.142
	I1101 00:35:54.060133   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Reserving static IP address...
	I1101 00:35:54.060166   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has current primary IP address 192.168.72.142 and MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:54.060647   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-195256", mac: "52:54:00:ff:f6:1c", ip: "192.168.72.142"} in network mk-default-k8s-diff-port-195256: {Iface:virbr2 ExpiryTime:2023-11-01 01:35:47 +0000 UTC Type:0 Mac:52:54:00:ff:f6:1c Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:default-k8s-diff-port-195256 Clientid:01:52:54:00:ff:f6:1c}
	I1101 00:35:54.060677   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | skip adding static IP to network mk-default-k8s-diff-port-195256 - found existing host DHCP lease matching {name: "default-k8s-diff-port-195256", mac: "52:54:00:ff:f6:1c", ip: "192.168.72.142"}
	I1101 00:35:54.060694   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Reserved static IP address: 192.168.72.142
	I1101 00:35:54.060711   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Waiting for SSH to be available...
	I1101 00:35:54.060735   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | Getting to WaitForSSH function...
	I1101 00:35:54.063170   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:54.063587   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f6:1c", ip: ""} in network mk-default-k8s-diff-port-195256: {Iface:virbr2 ExpiryTime:2023-11-01 01:35:47 +0000 UTC Type:0 Mac:52:54:00:ff:f6:1c Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:default-k8s-diff-port-195256 Clientid:01:52:54:00:ff:f6:1c}
	I1101 00:35:54.063631   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined IP address 192.168.72.142 and MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:54.063798   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | Using SSH client type: external
	I1101 00:35:54.063835   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/default-k8s-diff-port-195256/id_rsa (-rw-------)
	I1101 00:35:54.063880   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.142 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7251/.minikube/machines/default-k8s-diff-port-195256/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 00:35:54.063897   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | About to run SSH command:
	I1101 00:35:54.063910   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | exit 0
	I1101 00:35:54.166189   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | SSH cmd err, output: <nil>: 
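
WaitForSSH above shells out to the system ssh binary with host-key checking disabled and simply runs `exit 0` until the connection succeeds, which is why the log prints the full external option list. A sketch of that readiness probe — the 2s retry interval is an assumption:

    package main

    import (
    	"os/exec"
    	"time"
    )

    // waitForSSH retries `ssh ... exit 0` until it succeeds or the deadline
    // passes, mirroring the external-SSH probe in the log.
    func waitForSSH(user, host, keyPath string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	var err error
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("/usr/bin/ssh",
    			"-o", "StrictHostKeyChecking=no",
    			"-o", "UserKnownHostsFile=/dev/null",
    			"-o", "ConnectTimeout=10",
    			"-i", keyPath, "-p", "22",
    			user+"@"+host, "exit 0")
    		if err = cmd.Run(); err == nil {
    			return nil // SSH is up
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return err
    }
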
	I1101 00:35:54.166638   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetConfigRaw
	I1101 00:35:54.167284   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetIP
	I1101 00:35:54.169844   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:54.170165   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f6:1c", ip: ""} in network mk-default-k8s-diff-port-195256: {Iface:virbr2 ExpiryTime:2023-11-01 01:35:47 +0000 UTC Type:0 Mac:52:54:00:ff:f6:1c Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:default-k8s-diff-port-195256 Clientid:01:52:54:00:ff:f6:1c}
	I1101 00:35:54.170197   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined IP address 192.168.72.142 and MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:54.170443   60028 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/default-k8s-diff-port-195256/config.json ...
	I1101 00:35:54.170729   60028 machine.go:88] provisioning docker machine ...
	I1101 00:35:54.170750   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .DriverName
	I1101 00:35:54.170958   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetMachineName
	I1101 00:35:54.171136   60028 buildroot.go:166] provisioning hostname "default-k8s-diff-port-195256"
	I1101 00:35:54.171153   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetMachineName
	I1101 00:35:54.171290   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHHostname
	I1101 00:35:54.173506   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:54.173851   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f6:1c", ip: ""} in network mk-default-k8s-diff-port-195256: {Iface:virbr2 ExpiryTime:2023-11-01 01:35:47 +0000 UTC Type:0 Mac:52:54:00:ff:f6:1c Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:default-k8s-diff-port-195256 Clientid:01:52:54:00:ff:f6:1c}
	I1101 00:35:54.173888   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined IP address 192.168.72.142 and MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:54.174018   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHPort
	I1101 00:35:54.174187   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHKeyPath
	I1101 00:35:54.174368   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHKeyPath
	I1101 00:35:54.174513   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHUsername
	I1101 00:35:54.174714   60028 main.go:141] libmachine: Using SSH client type: native
	I1101 00:35:54.175207   60028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I1101 00:35:54.175232   60028 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-195256 && echo "default-k8s-diff-port-195256" | sudo tee /etc/hostname
	I1101 00:35:54.329236   60028 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-195256
	
	I1101 00:35:54.329269   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHHostname
	I1101 00:35:54.332257   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:54.332603   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f6:1c", ip: ""} in network mk-default-k8s-diff-port-195256: {Iface:virbr2 ExpiryTime:2023-11-01 01:35:47 +0000 UTC Type:0 Mac:52:54:00:ff:f6:1c Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:default-k8s-diff-port-195256 Clientid:01:52:54:00:ff:f6:1c}
	I1101 00:35:54.332642   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined IP address 192.168.72.142 and MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:54.332822   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHPort
	I1101 00:35:54.333026   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHKeyPath
	I1101 00:35:54.333191   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHKeyPath
	I1101 00:35:54.333352   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHUsername
	I1101 00:35:54.333521   60028 main.go:141] libmachine: Using SSH client type: native
	I1101 00:35:54.334002   60028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I1101 00:35:54.334034   60028 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-195256' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-195256/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-195256' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 00:35:54.476897   60028 main.go:141] libmachine: SSH cmd err, output: <nil>: 
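
The hostname script above is idempotent: it leaves /etc/hosts alone when a line already ends in the new hostname, rewrites an existing 127.0.1.1 entry in place, and appends one otherwise, so the VM resolves its own name without DNS. The same logic as a Go sketch operating on the file contents directly rather than via grep/sed:

    package main

    import (
    	"regexp"
    	"strings"
    )

    // ensureHostname returns /etc/hosts content guaranteed to resolve name,
    // mirroring the grep/sed/tee script: keep as-is if an entry exists,
    // rewrite the 127.0.1.1 line if present, else append one.
    func ensureHostname(hosts, name string) string {
    	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
    		return hosts
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if loopback.MatchString(hosts) {
    		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
    	}
    	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }
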
	I1101 00:35:54.476928   60028 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7251/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7251/.minikube}
	I1101 00:35:54.476971   60028 buildroot.go:174] setting up certificates
	I1101 00:35:54.476982   60028 provision.go:83] configureAuth start
	I1101 00:35:54.476996   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetMachineName
	I1101 00:35:54.477315   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetIP
	I1101 00:35:54.480251   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:54.480659   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f6:1c", ip: ""} in network mk-default-k8s-diff-port-195256: {Iface:virbr2 ExpiryTime:2023-11-01 01:35:47 +0000 UTC Type:0 Mac:52:54:00:ff:f6:1c Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:default-k8s-diff-port-195256 Clientid:01:52:54:00:ff:f6:1c}
	I1101 00:35:54.480693   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined IP address 192.168.72.142 and MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:54.480893   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHHostname
	I1101 00:35:54.483325   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:54.483629   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f6:1c", ip: ""} in network mk-default-k8s-diff-port-195256: {Iface:virbr2 ExpiryTime:2023-11-01 01:35:47 +0000 UTC Type:0 Mac:52:54:00:ff:f6:1c Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:default-k8s-diff-port-195256 Clientid:01:52:54:00:ff:f6:1c}
	I1101 00:35:54.483672   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined IP address 192.168.72.142 and MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:54.483845   60028 provision.go:138] copyHostCerts
	I1101 00:35:54.483911   60028 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem, removing ...
	I1101 00:35:54.483923   60028 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem
	I1101 00:35:54.483980   60028 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem (1675 bytes)
	I1101 00:35:54.484069   60028 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem, removing ...
	I1101 00:35:54.484080   60028 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem
	I1101 00:35:54.484103   60028 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem (1082 bytes)
	I1101 00:35:54.484163   60028 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem, removing ...
	I1101 00:35:54.484169   60028 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem
	I1101 00:35:54.484187   60028 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem (1123 bytes)
	I1101 00:35:54.484240   60028 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-195256 san=[192.168.72.142 192.168.72.142 localhost 127.0.0.1 minikube default-k8s-diff-port-195256]
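
provision.go:112 generates a Docker server certificate signed by the local CA, with the SAN list shown in the log (the VM IP, localhost, 127.0.0.1, minikube, and the machine name). A compact sketch of issuing such a certificate with crypto/x509 — the one-year validity and 2048-bit key size are assumptions, not values taken from the log:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert issues a CA-signed server certificate whose SANs are
    // split into IP addresses and DNS names, as in the san=[...] list above.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{org}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	for _, san := range sans {
    		if ip := net.ParseIP(san); ip != nil {
    			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
    		} else {
    			tmpl.DNSNames = append(tmpl.DNSNames, san)
    		}
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	return der, key, err
    }
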
	I1101 00:35:51.188746   59728 pod_ready.go:102] pod "coredns-5644d7b6d9-67f7c" in "kube-system" namespace has status "Ready":"False"
	I1101 00:35:53.686224   59728 pod_ready.go:102] pod "coredns-5644d7b6d9-67f7c" in "kube-system" namespace has status "Ready":"False"
	I1101 00:35:56.279389   60145 start.go:369] acquired machines lock for "embed-certs-503881" in 46.935407706s
	I1101 00:35:56.279433   60145 start.go:96] Skipping create...Using existing machine configuration
	I1101 00:35:56.279444   60145 fix.go:54] fixHost starting: 
	I1101 00:35:56.279882   60145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:35:56.279935   60145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:35:56.300108   60145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38569
	I1101 00:35:56.301256   60145 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:35:56.301982   60145 main.go:141] libmachine: Using API Version  1
	I1101 00:35:56.302011   60145 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:35:56.302403   60145 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:35:56.302645   60145 main.go:141] libmachine: (embed-certs-503881) Calling .DriverName
	I1101 00:35:56.302820   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetState
	I1101 00:35:56.304575   60145 fix.go:102] recreateIfNeeded on embed-certs-503881: state=Stopped err=<nil>
	I1101 00:35:56.304605   60145 main.go:141] libmachine: (embed-certs-503881) Calling .DriverName
	W1101 00:35:56.304778   60145 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 00:35:56.306890   60145 out.go:177] * Restarting existing kvm2 VM for "embed-certs-503881" ...
	I1101 00:35:56.619886   59907 api_server.go:279] https://192.168.50.197:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 00:35:56.619919   59907 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 00:35:56.619933   59907 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I1101 00:35:56.733052   59907 api_server.go:279] https://192.168.50.197:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 00:35:56.733097   59907 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 00:35:57.233831   59907 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I1101 00:35:57.241095   59907 api_server.go:279] https://192.168.50.197:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 00:35:57.241129   59907 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 00:35:57.733184   59907 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I1101 00:35:57.754522   59907 api_server.go:279] https://192.168.50.197:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 00:35:57.754559   59907 api_server.go:103] status: https://192.168.50.197:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 00:35:58.234124   59907 api_server.go:253] Checking apiserver healthz at https://192.168.50.197:8443/healthz ...
	I1101 00:35:58.239490   59907 api_server.go:279] https://192.168.50.197:8443/healthz returned 200:
	ok
	I1101 00:35:58.248595   59907 api_server.go:141] control plane version: v1.28.3
	I1101 00:35:58.248625   59907 api_server.go:131] duration metric: took 4.859362423s to wait for apiserver health ...
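The poll loop above settles once /healthz stops listing failed poststarthooks: each [-] entry is a startup hook (RBAC bootstrap roles, CRD informer sync, priority classes, and so on) that has not finished yet, and the apiserver returns 500 until every hook reports ok. A minimal Go sketch of the same ~500ms polling follows; the InsecureSkipVerify transport is an assumption to keep the sketch self-contained, since minikube itself authenticates with client certificates.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Assumption: skip TLS verification so the sketch runs standalone;
	// the real probe in api_server.go presents client certificates.
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.50.197:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			// On failure the apiserver lists each [+]/[-] hook, as in the log.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing above
	}
	fmt.Println("timed out waiting for apiserver health")
}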
	I1101 00:35:58.248633   59907 cni.go:84] Creating CNI manager for ""
	I1101 00:35:58.248651   59907 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1101 00:35:58.250630   59907 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 00:35:58.252265   59907 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 00:35:58.266150   59907 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
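The 457-byte conflist pushed here is what points the kubelet (via CRI) at the bridge plugin. As a rough illustration only, since the exact file minikube renders is not reproduced in the log, a bridge-plus-portmap conflist can be assembled like this:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Illustrative values only; the real /etc/cni/net.d/1-k8s.conflist
	// written above is not shown in the log, so fields here are assumptions.
	conflist := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []interface{}{
			map[string]interface{}{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			map[string]interface{}{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}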
	I1101 00:35:58.290615   59907 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 00:35:58.307877   59907 system_pods.go:59] 8 kube-system pods found
	I1101 00:35:58.307916   59907 system_pods.go:61] "coredns-5dd5756b68-lxp8r" [9ae8e5ef-82e2-40eb-9581-a2bc1bfb408a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 00:35:58.307933   59907 system_pods.go:61] "etcd-no-preload-658664" [37b740a8-10a1-4ac6-96c8-dc69db6bf670] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 00:35:58.307943   59907 system_pods.go:61] "kube-apiserver-no-preload-658664" [46d90280-14f6-4a8f-8736-bea2770b8ca9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 00:35:58.307952   59907 system_pods.go:61] "kube-controller-manager-no-preload-658664" [e00add62-ab19-406a-8ee3-484f1ff38ed8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 00:35:58.307966   59907 system_pods.go:61] "kube-proxy-sl6wg" [9b58fba5-ef28-4425-a7c5-3f089a0d71e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 00:35:58.307983   59907 system_pods.go:61] "kube-scheduler-no-preload-658664" [28f71b6c-8bec-49fb-ad05-4d013f203386] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 00:35:58.307993   59907 system_pods.go:61] "metrics-server-57f55c9bc5-25jvq" [796a57ed-af02-48d6-904a-9f1966a886c7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 00:35:58.308005   59907 system_pods.go:61] "storage-provisioner" [4cc66d3e-9093-4c9b-930b-5785b7a532c8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 00:35:58.308017   59907 system_pods.go:74] duration metric: took 17.379231ms to wait for pod list to return data ...
	I1101 00:35:58.308029   59907 node_conditions.go:102] verifying NodePressure condition ...
	I1101 00:35:58.312105   59907 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:35:58.312135   59907 node_conditions.go:123] node cpu capacity is 2
	I1101 00:35:58.312149   59907 node_conditions.go:105] duration metric: took 4.111746ms to run NodePressure ...
	I1101 00:35:58.312174   59907 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:35:56.308619   60145 main.go:141] libmachine: (embed-certs-503881) Calling .Start
	I1101 00:35:56.308821   60145 main.go:141] libmachine: (embed-certs-503881) Ensuring networks are active...
	I1101 00:35:56.309680   60145 main.go:141] libmachine: (embed-certs-503881) Ensuring network default is active
	I1101 00:35:56.310167   60145 main.go:141] libmachine: (embed-certs-503881) Ensuring network mk-embed-certs-503881 is active
	I1101 00:35:56.310683   60145 main.go:141] libmachine: (embed-certs-503881) Getting domain xml...
	I1101 00:35:56.311437   60145 main.go:141] libmachine: (embed-certs-503881) Creating domain...
	I1101 00:35:57.737000   60145 main.go:141] libmachine: (embed-certs-503881) Waiting to get IP...
	I1101 00:35:57.738090   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:35:57.738715   60145 main.go:141] libmachine: (embed-certs-503881) DBG | unable to find current IP address of domain embed-certs-503881 in network mk-embed-certs-503881
	I1101 00:35:57.738768   60145 main.go:141] libmachine: (embed-certs-503881) DBG | I1101 00:35:57.738700   60524 retry.go:31] will retry after 207.495601ms: waiting for machine to come up
	I1101 00:35:57.948511   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:35:57.949182   60145 main.go:141] libmachine: (embed-certs-503881) DBG | unable to find current IP address of domain embed-certs-503881 in network mk-embed-certs-503881
	I1101 00:35:57.949217   60145 main.go:141] libmachine: (embed-certs-503881) DBG | I1101 00:35:57.949123   60524 retry.go:31] will retry after 327.332454ms: waiting for machine to come up
	I1101 00:35:58.277862   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:35:58.278458   60145 main.go:141] libmachine: (embed-certs-503881) DBG | unable to find current IP address of domain embed-certs-503881 in network mk-embed-certs-503881
	I1101 00:35:58.278494   60145 main.go:141] libmachine: (embed-certs-503881) DBG | I1101 00:35:58.278400   60524 retry.go:31] will retry after 442.947949ms: waiting for machine to come up
	I1101 00:35:58.723237   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:35:58.723970   60145 main.go:141] libmachine: (embed-certs-503881) DBG | unable to find current IP address of domain embed-certs-503881 in network mk-embed-certs-503881
	I1101 00:35:58.724005   60145 main.go:141] libmachine: (embed-certs-503881) DBG | I1101 00:35:58.723919   60524 retry.go:31] will retry after 515.973577ms: waiting for machine to come up
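Each DBG line in this stanza is one pass of a poll loop: libvirt has not handed the embed-certs domain a DHCP lease yet, so the driver sleeps a jittered, growing interval (207ms, 327ms, 442ms, 515ms above) and asks again. A sketch of that pattern; the backoff constants are made up here, not minikube's actual retry.go policy.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry polls fn until it succeeds or attempts run out, sleeping a
// randomized, growing interval between tries. The exact growth/jitter
// used by minikube's retry.go is an assumption.
func retry(attempts int, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := time.Duration(i+1)*150*time.Millisecond +
			time.Duration(rand.Intn(100))*time.Millisecond
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	_ = retry(4, func() error { return errors.New("waiting for machine to come up") })
}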
	I1101 00:35:54.638289   60028 provision.go:172] copyRemoteCerts
	I1101 00:35:54.638351   60028 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 00:35:54.638373   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHHostname
	I1101 00:35:54.641357   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:54.641666   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f6:1c", ip: ""} in network mk-default-k8s-diff-port-195256: {Iface:virbr2 ExpiryTime:2023-11-01 01:35:47 +0000 UTC Type:0 Mac:52:54:00:ff:f6:1c Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:default-k8s-diff-port-195256 Clientid:01:52:54:00:ff:f6:1c}
	I1101 00:35:54.641704   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined IP address 192.168.72.142 and MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:54.641894   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHPort
	I1101 00:35:54.642079   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHKeyPath
	I1101 00:35:54.642290   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHUsername
	I1101 00:35:54.642443   60028 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/default-k8s-diff-port-195256/id_rsa Username:docker}
	I1101 00:35:54.746476   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1101 00:35:54.769465   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 00:35:54.791490   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 00:35:54.813093   60028 provision.go:86] duration metric: configureAuth took 336.099926ms
	I1101 00:35:54.813117   60028 buildroot.go:189] setting minikube options for container-runtime
	I1101 00:35:54.813366   60028 config.go:182] Loaded profile config "default-k8s-diff-port-195256": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1101 00:35:54.813397   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .DriverName
	I1101 00:35:54.813667   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHHostname
	I1101 00:35:54.816429   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:54.816892   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f6:1c", ip: ""} in network mk-default-k8s-diff-port-195256: {Iface:virbr2 ExpiryTime:2023-11-01 01:35:47 +0000 UTC Type:0 Mac:52:54:00:ff:f6:1c Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:default-k8s-diff-port-195256 Clientid:01:52:54:00:ff:f6:1c}
	I1101 00:35:54.816934   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined IP address 192.168.72.142 and MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:54.817057   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHPort
	I1101 00:35:54.817236   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHKeyPath
	I1101 00:35:54.817369   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHKeyPath
	I1101 00:35:54.817488   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHUsername
	I1101 00:35:54.817684   60028 main.go:141] libmachine: Using SSH client type: native
	I1101 00:35:54.818059   60028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I1101 00:35:54.818073   60028 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1101 00:35:54.952622   60028 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1101 00:35:54.952648   60028 buildroot.go:70] root file system type: tmpfs
	I1101 00:35:54.952785   60028 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1101 00:35:54.952814   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHHostname
	I1101 00:35:54.955507   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:54.955856   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f6:1c", ip: ""} in network mk-default-k8s-diff-port-195256: {Iface:virbr2 ExpiryTime:2023-11-01 01:35:47 +0000 UTC Type:0 Mac:52:54:00:ff:f6:1c Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:default-k8s-diff-port-195256 Clientid:01:52:54:00:ff:f6:1c}
	I1101 00:35:54.955896   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined IP address 192.168.72.142 and MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:54.956064   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHPort
	I1101 00:35:54.956221   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHKeyPath
	I1101 00:35:54.956408   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHKeyPath
	I1101 00:35:54.956531   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHUsername
	I1101 00:35:54.956740   60028 main.go:141] libmachine: Using SSH client type: native
	I1101 00:35:54.957099   60028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I1101 00:35:54.957160   60028 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1101 00:35:55.104857   60028 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1101 00:35:55.104901   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHHostname
	I1101 00:35:55.108048   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:55.108450   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f6:1c", ip: ""} in network mk-default-k8s-diff-port-195256: {Iface:virbr2 ExpiryTime:2023-11-01 01:35:47 +0000 UTC Type:0 Mac:52:54:00:ff:f6:1c Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:default-k8s-diff-port-195256 Clientid:01:52:54:00:ff:f6:1c}
	I1101 00:35:55.108492   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined IP address 192.168.72.142 and MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:55.108705   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHPort
	I1101 00:35:55.108881   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHKeyPath
	I1101 00:35:55.109016   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHKeyPath
	I1101 00:35:55.109183   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHUsername
	I1101 00:35:55.109407   60028 main.go:141] libmachine: Using SSH client type: native
	I1101 00:35:55.109734   60028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I1101 00:35:55.109760   60028 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1101 00:35:56.010480   60028 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1101 00:35:56.010529   60028 machine.go:91] provisioned docker machine in 1.839782751s
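The one-liner at 00:35:55.109 is an idempotence guard: diff the freshly rendered unit against what is on disk, and only move it into place and bounce docker when they differ. Here diff fails because the unit does not exist yet, so the install branch runs and systemctl creates the symlink. The same compare-then-swap shape in Go, with minimal error handling and the paths/commands taken from the log:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit replaces the docker unit only when its content changed,
// mirroring the "diff || { mv; daemon-reload; enable; restart; }" idiom.
func updateUnit(path string, rendered []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // unchanged: skip daemon-reload and the docker restart
	}
	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // stand-in content
	fmt.Println(updateUnit("/lib/systemd/system/docker.service", unit))
}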
	I1101 00:35:56.010542   60028 start.go:300] post-start starting for "default-k8s-diff-port-195256" (driver="kvm2")
	I1101 00:35:56.010558   60028 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 00:35:56.010576   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .DriverName
	I1101 00:35:56.010925   60028 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 00:35:56.010958   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHHostname
	I1101 00:35:56.013751   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:56.014111   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f6:1c", ip: ""} in network mk-default-k8s-diff-port-195256: {Iface:virbr2 ExpiryTime:2023-11-01 01:35:47 +0000 UTC Type:0 Mac:52:54:00:ff:f6:1c Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:default-k8s-diff-port-195256 Clientid:01:52:54:00:ff:f6:1c}
	I1101 00:35:56.014141   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined IP address 192.168.72.142 and MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:56.014270   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHPort
	I1101 00:35:56.014436   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHKeyPath
	I1101 00:35:56.014624   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHUsername
	I1101 00:35:56.014770   60028 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/default-k8s-diff-port-195256/id_rsa Username:docker}
	I1101 00:35:56.107863   60028 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 00:35:56.111886   60028 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 00:35:56.111913   60028 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7251/.minikube/addons for local assets ...
	I1101 00:35:56.111972   60028 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7251/.minikube/files for local assets ...
	I1101 00:35:56.112045   60028 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem -> 144632.pem in /etc/ssl/certs
	I1101 00:35:56.112126   60028 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 00:35:56.120363   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem --> /etc/ssl/certs/144632.pem (1708 bytes)
	I1101 00:35:56.143311   60028 start.go:303] post-start completed in 132.755911ms
	I1101 00:35:56.143335   60028 fix.go:56] fixHost completed within 20.323274241s
	I1101 00:35:56.143378   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHHostname
	I1101 00:35:56.145858   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:56.146296   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f6:1c", ip: ""} in network mk-default-k8s-diff-port-195256: {Iface:virbr2 ExpiryTime:2023-11-01 01:35:47 +0000 UTC Type:0 Mac:52:54:00:ff:f6:1c Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:default-k8s-diff-port-195256 Clientid:01:52:54:00:ff:f6:1c}
	I1101 00:35:56.146333   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined IP address 192.168.72.142 and MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:56.146527   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHPort
	I1101 00:35:56.146758   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHKeyPath
	I1101 00:35:56.146948   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHKeyPath
	I1101 00:35:56.147134   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHUsername
	I1101 00:35:56.147328   60028 main.go:141] libmachine: Using SSH client type: native
	I1101 00:35:56.147813   60028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I1101 00:35:56.147832   60028 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1101 00:35:56.279246   60028 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698798956.227988133
	
	I1101 00:35:56.279272   60028 fix.go:206] guest clock: 1698798956.227988133
	I1101 00:35:56.279283   60028 fix.go:219] Guest: 2023-11-01 00:35:56.227988133 +0000 UTC Remote: 2023-11-01 00:35:56.143339156 +0000 UTC m=+51.605495528 (delta=84.648977ms)
	I1101 00:35:56.279309   60028 fix.go:190] guest clock delta is within tolerance: 84.648977ms
	I1101 00:35:56.279315   60028 start.go:83] releasing machines lock for "default-k8s-diff-port-195256", held for 20.459295249s
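The fix.go lines above measure host/guest clock skew: run date +%s.%N in the guest, parse the epoch value, and compare it with the host's wall clock; the ~85ms delta is within tolerance, so no clock correction is forced. A parsing sketch using the exact timestamps from the log; the 1s tolerance is an assumption, not minikube's actual threshold.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses `date +%s.%N` output such as "1698798956.227988133".
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsec := int64(0)
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1698798956.227988133") // guest output from the log
	remote := time.Unix(1698798956, 143339156)          // host-side timestamp from the log
	delta := guest.Sub(remote)
	const tolerance = time.Second // assumed; minikube's real threshold may differ
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance && delta > -tolerance)
}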
	I1101 00:35:56.280138   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .DriverName
	I1101 00:35:56.280453   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetIP
	I1101 00:35:56.283686   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:56.284201   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f6:1c", ip: ""} in network mk-default-k8s-diff-port-195256: {Iface:virbr2 ExpiryTime:2023-11-01 01:35:47 +0000 UTC Type:0 Mac:52:54:00:ff:f6:1c Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:default-k8s-diff-port-195256 Clientid:01:52:54:00:ff:f6:1c}
	I1101 00:35:56.284241   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined IP address 192.168.72.142 and MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:56.284430   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .DriverName
	I1101 00:35:56.285047   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .DriverName
	I1101 00:35:56.285232   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .DriverName
	I1101 00:35:56.285359   60028 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 00:35:56.285408   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHHostname
	I1101 00:35:56.285431   60028 ssh_runner.go:195] Run: cat /version.json
	I1101 00:35:56.285474   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHHostname
	I1101 00:35:56.288538   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:56.288828   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:56.289019   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f6:1c", ip: ""} in network mk-default-k8s-diff-port-195256: {Iface:virbr2 ExpiryTime:2023-11-01 01:35:47 +0000 UTC Type:0 Mac:52:54:00:ff:f6:1c Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:default-k8s-diff-port-195256 Clientid:01:52:54:00:ff:f6:1c}
	I1101 00:35:56.289059   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined IP address 192.168.72.142 and MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:56.289250   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHPort
	I1101 00:35:56.289366   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f6:1c", ip: ""} in network mk-default-k8s-diff-port-195256: {Iface:virbr2 ExpiryTime:2023-11-01 01:35:47 +0000 UTC Type:0 Mac:52:54:00:ff:f6:1c Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:default-k8s-diff-port-195256 Clientid:01:52:54:00:ff:f6:1c}
	I1101 00:35:56.289398   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined IP address 192.168.72.142 and MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:56.289404   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHKeyPath
	I1101 00:35:56.289586   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHPort
	I1101 00:35:56.289747   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHKeyPath
	I1101 00:35:56.289758   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHUsername
	I1101 00:35:56.289922   60028 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/default-k8s-diff-port-195256/id_rsa Username:docker}
	I1101 00:35:56.290481   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHUsername
	I1101 00:35:56.290679   60028 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/default-k8s-diff-port-195256/id_rsa Username:docker}
	I1101 00:35:56.416819   60028 ssh_runner.go:195] Run: systemctl --version
	I1101 00:35:56.424339   60028 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 00:35:56.431111   60028 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 00:35:56.431186   60028 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 00:35:56.451943   60028 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 00:35:56.451978   60028 start.go:472] detecting cgroup driver to use...
	I1101 00:35:56.452128   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:35:56.472503   60028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1101 00:35:56.486186   60028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1101 00:35:56.499478   60028 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1101 00:35:56.499546   60028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1101 00:35:56.512776   60028 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1101 00:35:56.525461   60028 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1101 00:35:56.537771   60028 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1101 00:35:56.547889   60028 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 00:35:56.558192   60028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1101 00:35:56.570739   60028 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 00:35:56.582540   60028 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 00:35:56.593974   60028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:35:56.739988   60028 ssh_runner.go:195] Run: sudo systemctl restart containerd
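The run of sed invocations above normalizes containerd's config.toml for this cluster: cgroupfs instead of systemd cgroups, the runc v2 shim instead of the deprecated v1 runtime names, and conf_dir pointed at /etc/cni/net.d. Two of those edits expressed in Go, applied to a stand-in fragment rather than the VM's actual file:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in config fragment; the VM's real config.toml is not in the log.
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
runtime_type = "io.containerd.runtime.v1.linux"`

	// Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	conf = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
		ReplaceAllString(conf, "${1}SystemdCgroup = false")
	// Equivalent of: sed 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g'
	conf = regexp.MustCompile(`"io\.containerd\.runtime\.v1\.linux"`).
		ReplaceAllString(conf, `"io.containerd.runc.v2"`)

	fmt.Println(conf)
}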
	I1101 00:35:56.759504   60028 start.go:472] detecting cgroup driver to use...
	I1101 00:35:56.759588   60028 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1101 00:35:56.786653   60028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:35:56.808898   60028 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 00:35:56.832565   60028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:35:56.848621   60028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 00:35:56.864825   60028 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1101 00:35:56.900911   60028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 00:35:56.913619   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:35:56.933485   60028 ssh_runner.go:195] Run: which cri-dockerd
	I1101 00:35:56.938073   60028 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1101 00:35:56.947427   60028 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1101 00:35:56.966020   60028 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1101 00:35:57.098731   60028 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1101 00:35:57.212215   60028 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
	I1101 00:35:57.212383   60028 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1101 00:35:57.232814   60028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:35:57.340063   60028 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1101 00:35:58.850589   60028 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.510470427s)
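docker.go:561 writes a 130-byte /etc/docker/daemon.json to pin Docker to the cgroupfs driver before this restart. A plausible rendering of that file; the cgroup-driver key is the documented knob, and anything beyond it would be an assumption:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Assumption: only the cgroup driver setting is shown; the actual
	// 130-byte daemon.json may carry additional fields.
	daemon := map[string]interface{}{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, _ := json.MarshalIndent(daemon, "", "  ")
	fmt.Println(string(b))
}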
	I1101 00:35:58.850671   60028 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1101 00:35:58.976848   60028 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1101 00:35:59.124876   60028 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1101 00:35:59.253807   60028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:35:59.384737   60028 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1101 00:35:59.403124   60028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:35:59.532649   60028 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1101 00:35:59.630797   60028 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1101 00:35:59.630882   60028 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1101 00:35:59.637848   60028 start.go:540] Will wait 60s for crictl version
	I1101 00:35:59.637926   60028 ssh_runner.go:195] Run: which crictl
	I1101 00:35:59.642619   60028 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 00:35:59.702258   60028 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1101 00:35:59.702324   60028 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 00:35:59.732197   60028 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 00:35:58.680376   59907 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1101 00:35:58.695586   59907 kubeadm.go:787] kubelet initialised
	I1101 00:35:58.695677   59907 kubeadm.go:788] duration metric: took 15.202779ms waiting for restarted kubelet to initialise ...
	I1101 00:35:58.695717   59907 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:35:58.717413   59907 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-lxp8r" in "kube-system" namespace to be "Ready" ...
	I1101 00:35:58.727488   59907 pod_ready.go:97] node "no-preload-658664" hosting pod "coredns-5dd5756b68-lxp8r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658664" has status "Ready":"False"
	I1101 00:35:58.727518   59907 pod_ready.go:81] duration metric: took 10.072565ms waiting for pod "coredns-5dd5756b68-lxp8r" in "kube-system" namespace to be "Ready" ...
	E1101 00:35:58.727530   59907 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-658664" hosting pod "coredns-5dd5756b68-lxp8r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658664" has status "Ready":"False"
	I1101 00:35:58.727547   59907 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-658664" in "kube-system" namespace to be "Ready" ...
	I1101 00:35:58.737936   59907 pod_ready.go:97] node "no-preload-658664" hosting pod "etcd-no-preload-658664" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658664" has status "Ready":"False"
	I1101 00:35:58.737971   59907 pod_ready.go:81] duration metric: took 10.411251ms waiting for pod "etcd-no-preload-658664" in "kube-system" namespace to be "Ready" ...
	E1101 00:35:58.737982   59907 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-658664" hosting pod "etcd-no-preload-658664" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658664" has status "Ready":"False"
	I1101 00:35:58.737990   59907 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-658664" in "kube-system" namespace to be "Ready" ...
	I1101 00:35:58.749198   59907 pod_ready.go:97] node "no-preload-658664" hosting pod "kube-apiserver-no-preload-658664" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658664" has status "Ready":"False"
	I1101 00:35:58.749292   59907 pod_ready.go:81] duration metric: took 11.288104ms waiting for pod "kube-apiserver-no-preload-658664" in "kube-system" namespace to be "Ready" ...
	E1101 00:35:58.749311   59907 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-658664" hosting pod "kube-apiserver-no-preload-658664" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658664" has status "Ready":"False"
	I1101 00:35:58.749324   59907 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-658664" in "kube-system" namespace to be "Ready" ...
	I1101 00:35:58.758440   59907 pod_ready.go:97] node "no-preload-658664" hosting pod "kube-controller-manager-no-preload-658664" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658664" has status "Ready":"False"
	I1101 00:35:58.758465   59907 pod_ready.go:81] duration metric: took 9.126694ms waiting for pod "kube-controller-manager-no-preload-658664" in "kube-system" namespace to be "Ready" ...
	E1101 00:35:58.758475   59907 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-658664" hosting pod "kube-controller-manager-no-preload-658664" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658664" has status "Ready":"False"
	I1101 00:35:58.758482   59907 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sl6wg" in "kube-system" namespace to be "Ready" ...
	I1101 00:35:59.095313   59907 pod_ready.go:97] node "no-preload-658664" hosting pod "kube-proxy-sl6wg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658664" has status "Ready":"False"
	I1101 00:35:59.095349   59907 pod_ready.go:81] duration metric: took 336.858601ms waiting for pod "kube-proxy-sl6wg" in "kube-system" namespace to be "Ready" ...
	E1101 00:35:59.095362   59907 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-658664" hosting pod "kube-proxy-sl6wg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658664" has status "Ready":"False"
	I1101 00:35:59.095370   59907 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-658664" in "kube-system" namespace to be "Ready" ...
	I1101 00:35:59.502432   59907 pod_ready.go:97] node "no-preload-658664" hosting pod "kube-scheduler-no-preload-658664" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658664" has status "Ready":"False"
	I1101 00:35:59.502468   59907 pod_ready.go:81] duration metric: took 407.088619ms waiting for pod "kube-scheduler-no-preload-658664" in "kube-system" namespace to be "Ready" ...
	E1101 00:35:59.502480   59907 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-658664" hosting pod "kube-scheduler-no-preload-658664" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658664" has status "Ready":"False"
	I1101 00:35:59.502490   59907 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-25jvq" in "kube-system" namespace to be "Ready" ...
	I1101 00:35:59.895386   59907 pod_ready.go:97] node "no-preload-658664" hosting pod "metrics-server-57f55c9bc5-25jvq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658664" has status "Ready":"False"
	I1101 00:35:59.895426   59907 pod_ready.go:81] duration metric: took 392.926697ms waiting for pod "metrics-server-57f55c9bc5-25jvq" in "kube-system" namespace to be "Ready" ...
	E1101 00:35:59.895439   59907 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-658664" hosting pod "metrics-server-57f55c9bc5-25jvq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658664" has status "Ready":"False"
	I1101 00:35:59.895451   59907 pod_ready.go:38] duration metric: took 1.199695277s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
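The pod_ready loop keeps returning the same verdict for every system-critical pod: the Pod objects exist, but the node itself still reports Ready=False, so each wait is skipped and will be retried until the node condition flips. A client-go sketch of the underlying readiness check; the kubeconfig path is taken from the environment shown earlier in this run and is otherwise an assumption.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed path; substitute your own kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17486-7251/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s ready=%v\n", p.Name, ready)
	}
}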
	I1101 00:35:59.895472   59907 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 00:35:59.909307   59907 ops.go:34] apiserver oom_adj: -16
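The last restartCluster step confirms the apiserver is shielded from the kernel's OOM killer: /proc/<pid>/oom_adj reads -16, which strongly deprioritizes it as an OOM victim. A small probe in the same spirit; note oom_adj is the legacy knob, and current kernels expose the equivalent oom_score_adj on a -1000..1000 scale.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Same lookup as the log's `cat /proc/$(pgrep kube-apiserver)/oom_adj`.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.TrimSpace(strings.SplitN(string(out), "\n", 2)[0])
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}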
	I1101 00:35:59.909334   59907 kubeadm.go:640] restartCluster took 19.950903136s
	I1101 00:35:59.909343   59907 kubeadm.go:406] StartCluster complete in 19.981119691s
	I1101 00:35:59.909363   59907 settings.go:142] acquiring lock: {Name:mk57c659cffa0c6a1b184e5906c662f85ff8a099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:35:59.909444   59907 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7251/kubeconfig
	I1101 00:35:59.910754   59907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7251/kubeconfig: {Name:mk525de6243b20b40961c1a878f4272a26e9a097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:35:59.911030   59907 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 00:35:59.911140   59907 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 00:35:59.911236   59907 addons.go:69] Setting storage-provisioner=true in profile "no-preload-658664"
	I1101 00:35:59.911256   59907 addons.go:231] Setting addon storage-provisioner=true in "no-preload-658664"
	I1101 00:35:59.911255   59907 addons.go:69] Setting default-storageclass=true in profile "no-preload-658664"
	W1101 00:35:59.911264   59907 addons.go:240] addon storage-provisioner should already be in state true
	I1101 00:35:59.911262   59907 config.go:182] Loaded profile config "no-preload-658664": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1101 00:35:59.911275   59907 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-658664"
	I1101 00:35:59.911310   59907 host.go:66] Checking if "no-preload-658664" exists ...
	I1101 00:35:59.911334   59907 cache.go:107] acquiring lock: {Name:mkc5ed527821f669fe42d90dc96f9db56fa3565a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:35:59.911427   59907 cache.go:115] /home/jenkins/minikube-integration/17486-7251/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I1101 00:35:59.911438   59907 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17486-7251/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 108.688µs
	I1101 00:35:59.911447   59907 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17486-7251/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I1101 00:35:59.911454   59907 cache.go:87] Successfully saved all images to host disk.
	I1101 00:35:59.911629   59907 config.go:182] Loaded profile config "no-preload-658664": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1101 00:35:59.911712   59907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:35:59.911743   59907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:35:59.911789   59907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:35:59.911822   59907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:35:59.911878   59907 addons.go:69] Setting dashboard=true in profile "no-preload-658664"
	I1101 00:35:59.911906   59907 addons.go:231] Setting addon dashboard=true in "no-preload-658664"
	W1101 00:35:59.911920   59907 addons.go:240] addon dashboard should already be in state true
	I1101 00:35:59.911937   59907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:35:59.911960   59907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:35:59.912024   59907 addons.go:69] Setting metrics-server=true in profile "no-preload-658664"
	I1101 00:35:59.912078   59907 addons.go:231] Setting addon metrics-server=true in "no-preload-658664"
	W1101 00:35:59.912096   59907 addons.go:240] addon metrics-server should already be in state true
	I1101 00:35:59.912178   59907 host.go:66] Checking if "no-preload-658664" exists ...
	I1101 00:35:59.912184   59907 host.go:66] Checking if "no-preload-658664" exists ...
	I1101 00:35:59.912567   59907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:35:59.912644   59907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:35:59.912793   59907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:35:59.912860   59907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:35:59.922881   59907 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-658664" context rescaled to 1 replicas
	I1101 00:35:59.922922   59907 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.197 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1101 00:35:59.925194   59907 out.go:177] * Verifying Kubernetes components...
	I1101 00:35:59.927192   59907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:35:59.930780   59907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45809
	I1101 00:35:59.931324   59907 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:35:59.931815   59907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38909
	I1101 00:35:59.931899   59907 main.go:141] libmachine: Using API Version  1
	I1101 00:35:59.931918   59907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:35:59.931928   59907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36213
	I1101 00:35:59.932433   59907 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:35:59.932440   59907 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:35:59.932505   59907 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:35:59.932921   59907 main.go:141] libmachine: Using API Version  1
	I1101 00:35:59.932937   59907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:35:59.933001   59907 main.go:141] libmachine: Using API Version  1
	I1101 00:35:59.933016   59907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:35:59.933187   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetState
	I1101 00:35:59.933817   59907 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:35:59.933823   59907 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:35:59.934338   59907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:35:59.934372   59907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:35:59.934977   59907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:35:59.935075   59907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:35:59.935524   59907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:35:59.935569   59907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:35:59.937130   59907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37901
	I1101 00:35:59.937622   59907 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:35:59.938082   59907 main.go:141] libmachine: Using API Version  1
	I1101 00:35:59.938111   59907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:35:59.938534   59907 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:35:59.939127   59907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:35:59.939162   59907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:35:59.951901   59907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35253
	I1101 00:35:59.952505   59907 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:35:59.954166   59907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38057
	I1101 00:35:59.954285   59907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45901
	I1101 00:35:59.954471   59907 main.go:141] libmachine: Using API Version  1
	I1101 00:35:59.954486   59907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:35:59.954907   59907 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:35:59.954969   59907 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:35:59.955084   59907 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:35:59.955401   59907 main.go:141] libmachine: Using API Version  1
	I1101 00:35:59.955419   59907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:35:59.955631   59907 main.go:141] libmachine: Using API Version  1
	I1101 00:35:59.955652   59907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:35:59.955651   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetState
	I1101 00:35:59.955886   59907 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:35:59.956030   59907 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:35:59.956099   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetState
	I1101 00:35:59.956853   59907 main.go:141] libmachine: (no-preload-658664) Calling .DriverName
	I1101 00:35:59.957125   59907 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 00:35:59.957147   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHHostname
	I1101 00:35:59.958350   59907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41677
	I1101 00:35:59.958888   59907 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:35:59.959107   59907 addons.go:231] Setting addon default-storageclass=true in "no-preload-658664"
	W1101 00:35:59.959125   59907 addons.go:240] addon default-storageclass should already be in state true
	I1101 00:35:59.959165   59907 host.go:66] Checking if "no-preload-658664" exists ...
	I1101 00:35:59.959407   59907 main.go:141] libmachine: Using API Version  1
	I1101 00:35:59.959422   59907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:35:59.959539   59907 main.go:141] libmachine: (no-preload-658664) Calling .DriverName
	I1101 00:35:59.961770   59907 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 00:35:59.959850   59907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:35:59.959886   59907 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:35:59.962097   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:59.962677   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHPort
	I1101 00:35:59.964756   59907 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1101 00:35:59.963258   59907 main.go:141] libmachine: (no-preload-658664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:37:ac", ip: ""} in network mk-no-preload-658664: {Iface:virbr3 ExpiryTime:2023-11-01 01:32:48 +0000 UTC Type:0 Mac:52:54:00:9b:37:ac Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:no-preload-658664 Clientid:01:52:54:00:9b:37:ac}
	I1101 00:35:59.963293   59907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:35:59.963491   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetState
	I1101 00:35:59.963528   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHKeyPath
	I1101 00:35:59.966263   59907 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 00:35:59.966282   59907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 00:35:59.966302   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHHostname
	I1101 00:35:59.966353   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined IP address 192.168.50.197 and MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:59.967536   59907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36393
	I1101 00:35:59.967686   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHUsername
	I1101 00:35:59.967920   59907 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/no-preload-658664/id_rsa Username:docker}
	I1101 00:35:59.968881   59907 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:35:59.969646   59907 main.go:141] libmachine: Using API Version  1
	I1101 00:35:59.969662   59907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:35:59.969984   59907 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:35:59.970292   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:59.970313   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetState
	I1101 00:35:59.970729   59907 main.go:141] libmachine: (no-preload-658664) Calling .DriverName
	I1101 00:35:59.970849   59907 main.go:141] libmachine: (no-preload-658664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:37:ac", ip: ""} in network mk-no-preload-658664: {Iface:virbr3 ExpiryTime:2023-11-01 01:32:48 +0000 UTC Type:0 Mac:52:54:00:9b:37:ac Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:no-preload-658664 Clientid:01:52:54:00:9b:37:ac}
	I1101 00:35:59.970879   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined IP address 192.168.50.197 and MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:59.972666   59907 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 00:35:59.971232   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHPort
	I1101 00:35:59.972236   59907 main.go:141] libmachine: (no-preload-658664) Calling .DriverName
	I1101 00:35:59.974249   59907 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 00:35:59.974266   59907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 00:35:59.974282   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHHostname
	I1101 00:35:59.976420   59907 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 00:35:55.686907   59728 pod_ready.go:102] pod "coredns-5644d7b6d9-67f7c" in "kube-system" namespace has status "Ready":"False"
	I1101 00:35:58.187749   59728 pod_ready.go:102] pod "coredns-5644d7b6d9-67f7c" in "kube-system" namespace has status "Ready":"False"
	I1101 00:35:59.973006   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHKeyPath
	I1101 00:35:59.977776   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:59.978035   59907 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 00:35:59.978052   59907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 00:35:59.978069   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHHostname
	I1101 00:35:59.978285   59907 main.go:141] libmachine: (no-preload-658664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:37:ac", ip: ""} in network mk-no-preload-658664: {Iface:virbr3 ExpiryTime:2023-11-01 01:32:48 +0000 UTC Type:0 Mac:52:54:00:9b:37:ac Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:no-preload-658664 Clientid:01:52:54:00:9b:37:ac}
	I1101 00:35:59.978330   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined IP address 192.168.50.197 and MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:59.978343   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHUsername
	I1101 00:35:59.978449   59907 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/no-preload-658664/id_rsa Username:docker}
	I1101 00:35:59.978619   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHPort
	I1101 00:35:59.978824   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHKeyPath
	I1101 00:35:59.979037   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHUsername
	I1101 00:35:59.979201   59907 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/no-preload-658664/id_rsa Username:docker}
	I1101 00:35:59.981467   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:59.981960   59907 main.go:141] libmachine: (no-preload-658664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:37:ac", ip: ""} in network mk-no-preload-658664: {Iface:virbr3 ExpiryTime:2023-11-01 01:32:48 +0000 UTC Type:0 Mac:52:54:00:9b:37:ac Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:no-preload-658664 Clientid:01:52:54:00:9b:37:ac}
	I1101 00:35:59.981987   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined IP address 192.168.50.197 and MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:35:59.982094   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHPort
	I1101 00:35:59.982257   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHKeyPath
	I1101 00:35:59.982413   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHUsername
	I1101 00:35:59.982580   59907 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/no-preload-658664/id_rsa Username:docker}
	I1101 00:36:00.002225   59907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43757
	I1101 00:36:00.002752   59907 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:36:00.003220   59907 main.go:141] libmachine: Using API Version  1
	I1101 00:36:00.003237   59907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:36:00.003638   59907 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:36:00.004185   59907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:36:00.004230   59907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:36:00.057135   59907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39697
	I1101 00:36:00.057750   59907 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:36:00.058464   59907 main.go:141] libmachine: Using API Version  1
	I1101 00:36:00.058493   59907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:36:00.058831   59907 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:36:00.059122   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetState
	I1101 00:36:00.061091   59907 main.go:141] libmachine: (no-preload-658664) Calling .DriverName
	I1101 00:36:00.061456   59907 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 00:36:00.061476   59907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 00:36:00.061495   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHHostname
	I1101 00:36:00.065843   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:36:00.066220   59907 main.go:141] libmachine: (no-preload-658664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:37:ac", ip: ""} in network mk-no-preload-658664: {Iface:virbr3 ExpiryTime:2023-11-01 01:32:48 +0000 UTC Type:0 Mac:52:54:00:9b:37:ac Iaid: IPaddr:192.168.50.197 Prefix:24 Hostname:no-preload-658664 Clientid:01:52:54:00:9b:37:ac}
	I1101 00:36:00.066246   59907 main.go:141] libmachine: (no-preload-658664) DBG | domain no-preload-658664 has defined IP address 192.168.50.197 and MAC address 52:54:00:9b:37:ac in network mk-no-preload-658664
	I1101 00:36:00.066439   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHPort
	I1101 00:36:00.066673   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHKeyPath
	I1101 00:36:00.066823   59907 main.go:141] libmachine: (no-preload-658664) Calling .GetSSHUsername
	I1101 00:36:00.066956   59907 sshutil.go:53] new ssh client: &{IP:192.168.50.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/no-preload-658664/id_rsa Username:docker}
	I1101 00:36:00.101810   59907 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 00:36:00.101895   59907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 00:36:00.159307   59907 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 00:36:00.159332   59907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 00:36:00.170952   59907 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 00:36:00.170971   59907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 00:36:00.172045   59907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 00:36:00.215774   59907 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 00:36:00.215805   59907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 00:36:00.236580   59907 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 00:36:00.236610   59907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 00:36:00.250080   59907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 00:36:00.294000   59907 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 00:36:00.294027   59907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 00:36:00.295764   59907 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 00:36:00.295839   59907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 00:36:00.339548   59907 node_ready.go:35] waiting up to 6m0s for node "no-preload-658664" to be "Ready" ...
	I1101 00:36:00.339693   59907 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1101 00:36:00.339707   59907 docker.go:699] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1101 00:36:00.339789   59907 cache_images.go:84] Images are preloaded, skipping loading
	I1101 00:36:00.339798   59907 cache_images.go:262] succeeded pushing to: no-preload-658664
	I1101 00:36:00.339803   59907 cache_images.go:263] failed pushing to: 
	I1101 00:36:00.339832   59907 main.go:141] libmachine: Making call to close driver server
	I1101 00:36:00.339847   59907 main.go:141] libmachine: (no-preload-658664) Calling .Close
	I1101 00:36:00.340149   59907 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:36:00.340165   59907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:36:00.340176   59907 main.go:141] libmachine: Making call to close driver server
	I1101 00:36:00.340190   59907 main.go:141] libmachine: (no-preload-658664) Calling .Close
	I1101 00:36:00.342224   59907 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:36:00.342239   59907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:36:00.351684   59907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 00:36:00.375162   59907 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 00:36:00.375210   59907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 00:36:00.433813   59907 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 00:36:00.433850   59907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 00:36:00.506678   59907 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 00:36:00.506703   59907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 00:36:00.596957   59907 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 00:36:00.596981   59907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 00:36:00.630753   59907 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 00:36:00.630776   59907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 00:36:00.697220   59907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 00:36:02.024576   59907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.77444533s)
	I1101 00:36:02.024636   59907 main.go:141] libmachine: Making call to close driver server
	I1101 00:36:02.024649   59907 main.go:141] libmachine: (no-preload-658664) Calling .Close
	I1101 00:36:02.024689   59907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.672959578s)
	I1101 00:36:02.024732   59907 main.go:141] libmachine: Making call to close driver server
	I1101 00:36:02.024746   59907 main.go:141] libmachine: (no-preload-658664) Calling .Close
	I1101 00:36:02.024750   59907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.85265011s)
	I1101 00:36:02.024777   59907 main.go:141] libmachine: Making call to close driver server
	I1101 00:36:02.024799   59907 main.go:141] libmachine: (no-preload-658664) Calling .Close
	I1101 00:36:02.025194   59907 main.go:141] libmachine: (no-preload-658664) DBG | Closing plugin on server side
	I1101 00:36:02.025227   59907 main.go:141] libmachine: (no-preload-658664) DBG | Closing plugin on server side
	I1101 00:36:02.025228   59907 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:36:02.025244   59907 main.go:141] libmachine: (no-preload-658664) DBG | Closing plugin on server side
	I1101 00:36:02.025244   59907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:36:02.025255   59907 main.go:141] libmachine: Making call to close driver server
	I1101 00:36:02.025264   59907 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:36:02.025272   59907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:36:02.025282   59907 main.go:141] libmachine: Making call to close driver server
	I1101 00:36:02.025291   59907 main.go:141] libmachine: (no-preload-658664) Calling .Close
	I1101 00:36:02.025265   59907 main.go:141] libmachine: (no-preload-658664) Calling .Close
	I1101 00:36:02.025576   59907 main.go:141] libmachine: (no-preload-658664) DBG | Closing plugin on server side
	I1101 00:36:02.025599   59907 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:36:02.025610   59907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:36:02.026802   59907 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:36:02.026815   59907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:36:02.026825   59907 main.go:141] libmachine: Making call to close driver server
	I1101 00:36:02.026833   59907 main.go:141] libmachine: (no-preload-658664) Calling .Close
	I1101 00:36:02.026865   59907 main.go:141] libmachine: (no-preload-658664) DBG | Closing plugin on server side
	I1101 00:36:02.026875   59907 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:36:02.026899   59907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:36:02.027139   59907 main.go:141] libmachine: (no-preload-658664) DBG | Closing plugin on server side
	I1101 00:36:02.028378   59907 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:36:02.028398   59907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:36:02.028408   59907 addons.go:467] Verifying addon metrics-server=true in "no-preload-658664"
	I1101 00:36:02.034701   59907 main.go:141] libmachine: Making call to close driver server
	I1101 00:36:02.034717   59907 main.go:141] libmachine: (no-preload-658664) Calling .Close
	I1101 00:36:02.035010   59907 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:36:02.035029   59907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:36:02.354098   59907 node_ready.go:58] node "no-preload-658664" has status "Ready":"False"
	I1101 00:36:02.466741   59907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.769466734s)
	I1101 00:36:02.466795   59907 main.go:141] libmachine: Making call to close driver server
	I1101 00:36:02.466808   59907 main.go:141] libmachine: (no-preload-658664) Calling .Close
	I1101 00:36:02.467209   59907 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:36:02.467230   59907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:36:02.467232   59907 main.go:141] libmachine: (no-preload-658664) DBG | Closing plugin on server side
	I1101 00:36:02.467241   59907 main.go:141] libmachine: Making call to close driver server
	I1101 00:36:02.467252   59907 main.go:141] libmachine: (no-preload-658664) Calling .Close
	I1101 00:36:02.467504   59907 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:36:02.467517   59907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:36:02.469484   59907 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-658664 addons enable metrics-server	
	
	
	I1101 00:36:02.471160   59907 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I1101 00:36:02.472652   59907 addons.go:502] enable addons completed in 2.561532634s: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
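
The addon flow above is: scp each manifest into /etc/kubernetes/addons/ on the guest, then apply them in batches with the node's own kubectl binary. Once "enable addons completed" is logged, the result can be spot-checked from the host; a minimal sketch, assuming the profile name from the log and the addons' upstream default namespaces:

	# list addon state for the profile seen in the log
	out/minikube-linux-amd64 -p no-preload-658664 addons list
	# dashboard and metrics-server land in their usual namespaces
	kubectl --context no-preload-658664 -n kubernetes-dashboard get pods
	kubectl --context no-preload-658664 -n kube-system get deploy metrics-server
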
	I1101 00:35:59.241238   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:35:59.241901   60145 main.go:141] libmachine: (embed-certs-503881) DBG | unable to find current IP address of domain embed-certs-503881 in network mk-embed-certs-503881
	I1101 00:35:59.241924   60145 main.go:141] libmachine: (embed-certs-503881) DBG | I1101 00:35:59.241859   60524 retry.go:31] will retry after 757.527538ms: waiting for machine to come up
	I1101 00:36:00.007082   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:00.007742   60145 main.go:141] libmachine: (embed-certs-503881) DBG | unable to find current IP address of domain embed-certs-503881 in network mk-embed-certs-503881
	I1101 00:36:00.007770   60145 main.go:141] libmachine: (embed-certs-503881) DBG | I1101 00:36:00.007659   60524 retry.go:31] will retry after 594.226865ms: waiting for machine to come up
	I1101 00:36:00.603247   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:00.603883   60145 main.go:141] libmachine: (embed-certs-503881) DBG | unable to find current IP address of domain embed-certs-503881 in network mk-embed-certs-503881
	I1101 00:36:00.603914   60145 main.go:141] libmachine: (embed-certs-503881) DBG | I1101 00:36:00.603835   60524 retry.go:31] will retry after 1.165115016s: waiting for machine to come up
	I1101 00:36:01.770190   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:01.770802   60145 main.go:141] libmachine: (embed-certs-503881) DBG | unable to find current IP address of domain embed-certs-503881 in network mk-embed-certs-503881
	I1101 00:36:01.770832   60145 main.go:141] libmachine: (embed-certs-503881) DBG | I1101 00:36:01.770749   60524 retry.go:31] will retry after 959.580558ms: waiting for machine to come up
	I1101 00:36:02.731778   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:02.732344   60145 main.go:141] libmachine: (embed-certs-503881) DBG | unable to find current IP address of domain embed-certs-503881 in network mk-embed-certs-503881
	I1101 00:36:02.732383   60145 main.go:141] libmachine: (embed-certs-503881) DBG | I1101 00:36:02.732269   60524 retry.go:31] will retry after 1.353164686s: waiting for machine to come up
	I1101 00:36:04.087793   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:04.088443   60145 main.go:141] libmachine: (embed-certs-503881) DBG | unable to find current IP address of domain embed-certs-503881 in network mk-embed-certs-503881
	I1101 00:36:04.088477   60145 main.go:141] libmachine: (embed-certs-503881) DBG | I1101 00:36:04.088392   60524 retry.go:31] will retry after 1.787366098s: waiting for machine to come up
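
The retry.go lines above are minikube's jittered backoff while libvirt hands the restarted embed-certs VM a fresh DHCP lease. The same wait can be reproduced by hand against libvirt; a rough sketch, assuming the domain name from the log and the default qemu:///system connection:

	# poll libvirt until the guest NIC shows up in the DHCP lease table
	for delay in 1 2 4 8 16; do
	  ip=$(sudo virsh domifaddr embed-certs-503881 | awk '/ipv4/ {print $4}')
	  [ -n "$ip" ] && { echo "machine up at ${ip%/*}"; break; }
	  sleep "$delay"
	done
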
	I1101 00:35:59.765461   60028 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1101 00:35:59.765510   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetIP
	I1101 00:35:59.768493   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:59.768870   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f6:1c", ip: ""} in network mk-default-k8s-diff-port-195256: {Iface:virbr2 ExpiryTime:2023-11-01 01:35:47 +0000 UTC Type:0 Mac:52:54:00:ff:f6:1c Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:default-k8s-diff-port-195256 Clientid:01:52:54:00:ff:f6:1c}
	I1101 00:35:59.768905   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined IP address 192.168.72.142 and MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:35:59.769124   60028 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1101 00:35:59.773262   60028 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 00:35:59.785021   60028 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1101 00:35:59.785086   60028 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 00:35:59.806562   60028 docker.go:699] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1101 00:35:59.806594   60028 docker.go:629] Images already preloaded, skipping extraction
	I1101 00:35:59.806656   60028 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 00:35:59.829099   60028 docker.go:699] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1101 00:35:59.829132   60028 cache_images.go:84] Images are preloaded, skipping loading
	I1101 00:35:59.829203   60028 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1101 00:35:59.857866   60028 cni.go:84] Creating CNI manager for ""
	I1101 00:35:59.857895   60028 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1101 00:35:59.857914   60028 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 00:35:59.857946   60028 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.142 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-195256 NodeName:default-k8s-diff-port-195256 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.142"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.142 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 00:35:59.858143   60028 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.142
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-195256"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.142
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.142"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
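
The block above is the complete generated kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. On a v1.26+ node it can be sanity-checked before use; a sketch, assuming the `kubeadm config validate` subcommand shipped with the node's binaries:

	sudo /var/lib/minikube/binaries/v1.28.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
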
	
	I1101 00:35:59.858241   60028 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=default-k8s-diff-port-195256 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-195256 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
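
The empty ExecStart= followed by a populated one is the standard systemd drop-in idiom for replacing, rather than appending to, the packaged unit's command line. The next few log lines scp this drop-in into place; the moving parts, sketched with the paths taken from the log:

	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo cp 10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	sudo systemctl daemon-reload   # pick up the new drop-in
	sudo systemctl restart kubelet
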
	I1101 00:35:59.858317   60028 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 00:35:59.868907   60028 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 00:35:59.868989   60028 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 00:35:59.878842   60028 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (391 bytes)
	I1101 00:35:59.895426   60028 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 00:35:59.924382   60028 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I1101 00:35:59.972835   60028 ssh_runner.go:195] Run: grep 192.168.72.142	control-plane.minikube.internal$ /etc/hosts
	I1101 00:35:59.980276   60028 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.142	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 00:36:00.005064   60028 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/default-k8s-diff-port-195256 for IP: 192.168.72.142
	I1101 00:36:00.005133   60028 certs.go:190] acquiring lock for shared ca certs: {Name:mkd78a553474b872bb63abf547b6fa0a317dc3be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:36:00.005285   60028 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.key
	I1101 00:36:00.005342   60028 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.key
	I1101 00:36:00.005453   60028 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/default-k8s-diff-port-195256/client.key
	I1101 00:36:00.005542   60028 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/default-k8s-diff-port-195256/apiserver.key.f560a2b4
	I1101 00:36:00.005604   60028 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/default-k8s-diff-port-195256/proxy-client.key
	I1101 00:36:00.005742   60028 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463.pem (1338 bytes)
	W1101 00:36:00.005782   60028 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463_empty.pem, impossibly tiny 0 bytes
	I1101 00:36:00.005801   60028 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 00:36:00.005834   60028 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem (1082 bytes)
	I1101 00:36:00.005875   60028 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem (1123 bytes)
	I1101 00:36:00.005914   60028 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem (1675 bytes)
	I1101 00:36:00.005987   60028 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem (1708 bytes)
	I1101 00:36:00.006823   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/default-k8s-diff-port-195256/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 00:36:00.038975   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/default-k8s-diff-port-195256/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 00:36:00.068509   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/default-k8s-diff-port-195256/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 00:36:00.103311   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/default-k8s-diff-port-195256/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 00:36:00.131669   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 00:36:00.161839   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 00:36:00.193080   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 00:36:00.224667   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 00:36:00.252679   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem --> /usr/share/ca-certificates/144632.pem (1708 bytes)
	I1101 00:36:00.279916   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 00:36:00.306466   60028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463.pem --> /usr/share/ca-certificates/14463.pem (1338 bytes)
	I1101 00:36:00.331713   60028 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 00:36:00.352366   60028 ssh_runner.go:195] Run: openssl version
	I1101 00:36:00.359760   60028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144632.pem && ln -fs /usr/share/ca-certificates/144632.pem /etc/ssl/certs/144632.pem"
	I1101 00:36:00.372553   60028 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144632.pem
	I1101 00:36:00.378088   60028 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:48 /usr/share/ca-certificates/144632.pem
	I1101 00:36:00.378164   60028 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144632.pem
	I1101 00:36:00.385380   60028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144632.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 00:36:00.398653   60028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 00:36:00.412451   60028 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:36:00.418990   60028 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:36:00.419066   60028 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:36:00.426338   60028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 00:36:00.438698   60028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14463.pem && ln -fs /usr/share/ca-certificates/14463.pem /etc/ssl/certs/14463.pem"
	I1101 00:36:00.452658   60028 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14463.pem
	I1101 00:36:00.457414   60028 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:48 /usr/share/ca-certificates/14463.pem
	I1101 00:36:00.457484   60028 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14463.pem
	I1101 00:36:00.463935   60028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14463.pem /etc/ssl/certs/51391683.0"
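
The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory lookup: clients resolve a CA under /etc/ssl/certs by the certificate's subject-name hash, so each PEM needs a <hash>.0 symlink. Condensed, using one of the log's own certificates:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"   # b5213941.0 in the log
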
	I1101 00:36:00.477548   60028 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 00:36:00.483235   60028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 00:36:00.490549   60028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 00:36:00.497814   60028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 00:36:00.504419   60028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 00:36:00.512996   60028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 00:36:00.520931   60028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
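
-checkend 86400 makes openssl exit non-zero if the certificate expires within the next 86400 seconds, so each Run above is a "still valid for at least a day" probe on one control-plane cert. As a standalone check:

	if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	  echo "etcd server cert good for >24h"
	else
	  echo "etcd server cert expires within 24h"
	fi
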
	I1101 00:36:00.526926   60028 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-195256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-195256 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.142 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:36:00.527078   60028 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1101 00:36:00.546073   60028 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 00:36:00.555758   60028 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 00:36:00.555780   60028 kubeadm.go:636] restartCluster start
	I1101 00:36:00.555857   60028 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 00:36:00.567662   60028 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:00.568630   60028 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-195256" does not appear in /home/jenkins/minikube-integration/17486-7251/kubeconfig
	I1101 00:36:00.569011   60028 kubeconfig.go:146] "default-k8s-diff-port-195256" context is missing from /home/jenkins/minikube-integration/17486-7251/kubeconfig - will repair!
	I1101 00:36:00.569621   60028 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7251/kubeconfig: {Name:mk525de6243b20b40961c1a878f4272a26e9a097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:36:00.571104   60028 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 00:36:00.583088   60028 api_server.go:166] Checking apiserver status ...
	I1101 00:36:00.583153   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:00.595048   60028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:00.595074   60028 api_server.go:166] Checking apiserver status ...
	I1101 00:36:00.595193   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:00.607474   60028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:01.108247   60028 api_server.go:166] Checking apiserver status ...
	I1101 00:36:01.108346   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:01.124600   60028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:01.607674   60028 api_server.go:166] Checking apiserver status ...
	I1101 00:36:01.607762   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:01.619981   60028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:02.108614   60028 api_server.go:166] Checking apiserver status ...
	I1101 00:36:02.108705   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:02.124248   60028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:02.607665   60028 api_server.go:166] Checking apiserver status ...
	I1101 00:36:02.607759   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:02.619986   60028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:03.108567   60028 api_server.go:166] Checking apiserver status ...
	I1101 00:36:03.108667   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:03.120867   60028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:03.607910   60028 api_server.go:166] Checking apiserver status ...
	I1101 00:36:03.608022   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:03.621333   60028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:04.107762   60028 api_server.go:166] Checking apiserver status ...
	I1101 00:36:04.107863   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:04.122254   60028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
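
The repeating "Checking apiserver status" / "stopped" pairs are a liveness poll: pgrep -x (exact match) -n (newest) -f (match the full command line) exits with status 1 until a kube-apiserver process mentioning minikube exists, and minikube retries on a roughly 500ms cadence. The equivalent loop, stripped down:

	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.5
	done
	echo "apiserver pid: $(sudo pgrep -xnf 'kube-apiserver.*minikube.*')"
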
	I1101 00:36:00.686307   59728 pod_ready.go:102] pod "coredns-5644d7b6d9-67f7c" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:03.185007   59728 pod_ready.go:102] pod "coredns-5644d7b6d9-67f7c" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:04.851479   59907 node_ready.go:58] node "no-preload-658664" has status "Ready":"False"
	I1101 00:36:07.352514   59907 node_ready.go:49] node "no-preload-658664" has status "Ready":"True"
	I1101 00:36:07.352542   59907 node_ready.go:38] duration metric: took 7.012901756s waiting for node "no-preload-658664" to be "Ready" ...
	I1101 00:36:07.352551   59907 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:36:07.360698   59907 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-lxp8r" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:07.891621   59907 pod_ready.go:92] pod "coredns-5dd5756b68-lxp8r" in "kube-system" namespace has status "Ready":"True"
	I1101 00:36:07.891649   59907 pod_ready.go:81] duration metric: took 530.924675ms waiting for pod "coredns-5dd5756b68-lxp8r" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:07.891664   59907 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-658664" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:05.877132   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:05.877640   60145 main.go:141] libmachine: (embed-certs-503881) DBG | unable to find current IP address of domain embed-certs-503881 in network mk-embed-certs-503881
	I1101 00:36:05.877672   60145 main.go:141] libmachine: (embed-certs-503881) DBG | I1101 00:36:05.877576   60524 retry.go:31] will retry after 2.600656876s: waiting for machine to come up
	I1101 00:36:08.479972   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:08.480631   60145 main.go:141] libmachine: (embed-certs-503881) DBG | unable to find current IP address of domain embed-certs-503881 in network mk-embed-certs-503881
	I1101 00:36:08.480663   60145 main.go:141] libmachine: (embed-certs-503881) DBG | I1101 00:36:08.480540   60524 retry.go:31] will retry after 2.767316051s: waiting for machine to come up
	I1101 00:36:04.608110   60028 api_server.go:166] Checking apiserver status ...
	I1101 00:36:04.608209   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:04.623800   60028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:05.108551   60028 api_server.go:166] Checking apiserver status ...
	I1101 00:36:05.108648   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:05.120378   60028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:05.607646   60028 api_server.go:166] Checking apiserver status ...
	I1101 00:36:05.607762   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:05.623016   60028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:06.107734   60028 api_server.go:166] Checking apiserver status ...
	I1101 00:36:06.107832   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:06.123880   60028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:06.608301   60028 api_server.go:166] Checking apiserver status ...
	I1101 00:36:06.608400   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:06.623179   60028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:07.108376   60028 api_server.go:166] Checking apiserver status ...
	I1101 00:36:07.108465   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:07.123865   60028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:07.608570   60028 api_server.go:166] Checking apiserver status ...
	I1101 00:36:07.608659   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:07.624401   60028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:08.107620   60028 api_server.go:166] Checking apiserver status ...
	I1101 00:36:08.107682   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:08.124148   60028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:08.607616   60028 api_server.go:166] Checking apiserver status ...
	I1101 00:36:08.607726   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:08.621530   60028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:09.107623   60028 api_server.go:166] Checking apiserver status ...
	I1101 00:36:09.107829   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:09.120433   60028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:05.186036   59728 pod_ready.go:102] pod "coredns-5644d7b6d9-67f7c" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:07.186351   59728 pod_ready.go:102] pod "coredns-5644d7b6d9-67f7c" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:09.187892   59728 pod_ready.go:102] pod "coredns-5644d7b6d9-67f7c" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:09.915241   59907 pod_ready.go:102] pod "etcd-no-preload-658664" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:11.911696   59907 pod_ready.go:92] pod "etcd-no-preload-658664" in "kube-system" namespace has status "Ready":"True"
	I1101 00:36:11.911719   59907 pod_ready.go:81] duration metric: took 4.020047809s waiting for pod "etcd-no-preload-658664" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:11.911729   59907 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-658664" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:11.917992   59907 pod_ready.go:92] pod "kube-apiserver-no-preload-658664" in "kube-system" namespace has status "Ready":"True"
	I1101 00:36:11.918013   59907 pod_ready.go:81] duration metric: took 6.278297ms waiting for pod "kube-apiserver-no-preload-658664" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:11.918021   59907 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-658664" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:11.923803   59907 pod_ready.go:92] pod "kube-controller-manager-no-preload-658664" in "kube-system" namespace has status "Ready":"True"
	I1101 00:36:11.923830   59907 pod_ready.go:81] duration metric: took 5.801152ms waiting for pod "kube-controller-manager-no-preload-658664" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:11.923842   59907 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sl6wg" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:12.151900   59907 pod_ready.go:92] pod "kube-proxy-sl6wg" in "kube-system" namespace has status "Ready":"True"
	I1101 00:36:12.151930   59907 pod_ready.go:81] duration metric: took 228.080022ms waiting for pod "kube-proxy-sl6wg" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:12.151942   59907 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-658664" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:12.552821   59907 pod_ready.go:92] pod "kube-scheduler-no-preload-658664" in "kube-system" namespace has status "Ready":"True"
	I1101 00:36:12.552847   59907 pod_ready.go:81] duration metric: took 400.896631ms waiting for pod "kube-scheduler-no-preload-658664" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:12.552863   59907 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-25jvq" in "kube-system" namespace to be "Ready" ...
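
Process 59907 is walking the system-critical pods one at a time, recording a duration metric for each Ready transition. A rough equivalent of a single pod's readiness wait using client-go (the kubeconfig path, pod name, and poll interval are placeholders; minikube's pod_ready.go differs in detail):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	start := time.Now()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-658664", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Printf("pod ready after %s\n", time.Since(start))
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
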
	I1101 00:36:11.251692   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:11.252132   60145 main.go:141] libmachine: (embed-certs-503881) DBG | unable to find current IP address of domain embed-certs-503881 in network mk-embed-certs-503881
	I1101 00:36:11.252161   60145 main.go:141] libmachine: (embed-certs-503881) DBG | I1101 00:36:11.252072   60524 retry.go:31] will retry after 3.47351319s: waiting for machine to come up
	I1101 00:36:09.608263   60028 api_server.go:166] Checking apiserver status ...
	I1101 00:36:09.608349   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:09.625552   60028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:10.108292   60028 api_server.go:166] Checking apiserver status ...
	I1101 00:36:10.108433   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:10.123659   60028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:10.583386   60028 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1101 00:36:10.583497   60028 kubeadm.go:1128] stopping kube-system containers ...
	I1101 00:36:10.583636   60028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1101 00:36:10.611799   60028 docker.go:470] Stopping containers: [3c4debad6b59 fe81e6f0b409 ef9f66c0a601 4144556ebe8e dcc61e63e192 c2f2e52e0a08 4a198a7c5758 f3b389a8a11e 7e5a99a7be08 b84432b848ad 4916f177861f 1bd7cdab2c0a bf2a5d5b14de 24fdabbd688b 9c58adb47b62]
	I1101 00:36:10.611898   60028 ssh_runner.go:195] Run: docker stop 3c4debad6b59 fe81e6f0b409 ef9f66c0a601 4144556ebe8e dcc61e63e192 c2f2e52e0a08 4a198a7c5758 f3b389a8a11e 7e5a99a7be08 b84432b848ad 4916f177861f 1bd7cdab2c0a bf2a5d5b14de 24fdabbd688b 9c58adb47b62
	I1101 00:36:10.639422   60028 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 00:36:10.658571   60028 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 00:36:10.673083   60028 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 00:36:10.673149   60028 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 00:36:10.686288   60028 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 00:36:10.686320   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:36:10.843995   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:36:12.132285   60028 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.288191516s)
	I1101 00:36:12.132334   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:36:12.349959   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:36:12.495326   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:36:12.600698   60028 api_server.go:52] waiting for apiserver process to appear ...
	I1101 00:36:12.600776   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:36:12.616805   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:36:13.135602   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:36:13.635186   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:36:14.135767   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:36:11.368208   59728 pod_ready.go:102] pod "coredns-5644d7b6d9-67f7c" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:13.690984   59728 pod_ready.go:102] pod "coredns-5644d7b6d9-67f7c" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:14.867214   59907 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25jvq" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:17.055114   59907 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25jvq" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:14.728199   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:14.728820   60145 main.go:141] libmachine: (embed-certs-503881) Found IP for machine: 192.168.61.122
	I1101 00:36:14.728842   60145 main.go:141] libmachine: (embed-certs-503881) Reserving static IP address...
	I1101 00:36:14.728869   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has current primary IP address 192.168.61.122 and MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:14.729326   60145 main.go:141] libmachine: (embed-certs-503881) DBG | found host DHCP lease matching {name: "embed-certs-503881", mac: "52:54:00:1e:a3:e2", ip: "192.168.61.122"} in network mk-embed-certs-503881: {Iface:virbr4 ExpiryTime:2023-11-01 01:33:15 +0000 UTC Type:0 Mac:52:54:00:1e:a3:e2 Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:embed-certs-503881 Clientid:01:52:54:00:1e:a3:e2}
	I1101 00:36:14.729371   60145 main.go:141] libmachine: (embed-certs-503881) DBG | skip adding static IP to network mk-embed-certs-503881 - found existing host DHCP lease matching {name: "embed-certs-503881", mac: "52:54:00:1e:a3:e2", ip: "192.168.61.122"}
	I1101 00:36:14.729396   60145 main.go:141] libmachine: (embed-certs-503881) DBG | Getting to WaitForSSH function...
	I1101 00:36:14.729418   60145 main.go:141] libmachine: (embed-certs-503881) Reserved static IP address: 192.168.61.122
	I1101 00:36:14.729431   60145 main.go:141] libmachine: (embed-certs-503881) Waiting for SSH to be available...
	I1101 00:36:14.732296   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:14.732776   60145 main.go:141] libmachine: (embed-certs-503881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:a3:e2", ip: ""} in network mk-embed-certs-503881: {Iface:virbr4 ExpiryTime:2023-11-01 01:33:15 +0000 UTC Type:0 Mac:52:54:00:1e:a3:e2 Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:embed-certs-503881 Clientid:01:52:54:00:1e:a3:e2}
	I1101 00:36:14.732818   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined IP address 192.168.61.122 and MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:14.733095   60145 main.go:141] libmachine: (embed-certs-503881) DBG | Using SSH client type: external
	I1101 00:36:14.733119   60145 main.go:141] libmachine: (embed-certs-503881) DBG | Using SSH private key: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/embed-certs-503881/id_rsa (-rw-------)
	I1101 00:36:14.733149   60145 main.go:141] libmachine: (embed-certs-503881) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.122 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17486-7251/.minikube/machines/embed-certs-503881/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1101 00:36:14.733183   60145 main.go:141] libmachine: (embed-certs-503881) DBG | About to run SSH command:
	I1101 00:36:14.733195   60145 main.go:141] libmachine: (embed-certs-503881) DBG | exit 0
	I1101 00:36:14.842818   60145 main.go:141] libmachine: (embed-certs-503881) DBG | SSH cmd err, output: <nil>: 
	I1101 00:36:14.843330   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetConfigRaw
	I1101 00:36:14.844070   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetIP
	I1101 00:36:14.847193   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:14.847616   60145 main.go:141] libmachine: (embed-certs-503881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:a3:e2", ip: ""} in network mk-embed-certs-503881: {Iface:virbr4 ExpiryTime:2023-11-01 01:33:15 +0000 UTC Type:0 Mac:52:54:00:1e:a3:e2 Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:embed-certs-503881 Clientid:01:52:54:00:1e:a3:e2}
	I1101 00:36:14.847643   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined IP address 192.168.61.122 and MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:14.848112   60145 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/embed-certs-503881/config.json ...
	I1101 00:36:14.848359   60145 machine.go:88] provisioning docker machine ...
	I1101 00:36:14.848380   60145 main.go:141] libmachine: (embed-certs-503881) Calling .DriverName
	I1101 00:36:14.848597   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetMachineName
	I1101 00:36:14.848776   60145 buildroot.go:166] provisioning hostname "embed-certs-503881"
	I1101 00:36:14.848798   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetMachineName
	I1101 00:36:14.848921   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHHostname
	I1101 00:36:14.851824   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:14.852152   60145 main.go:141] libmachine: (embed-certs-503881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:a3:e2", ip: ""} in network mk-embed-certs-503881: {Iface:virbr4 ExpiryTime:2023-11-01 01:33:15 +0000 UTC Type:0 Mac:52:54:00:1e:a3:e2 Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:embed-certs-503881 Clientid:01:52:54:00:1e:a3:e2}
	I1101 00:36:14.852183   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined IP address 192.168.61.122 and MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:14.852446   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHPort
	I1101 00:36:14.852649   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHKeyPath
	I1101 00:36:14.852838   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHKeyPath
	I1101 00:36:14.853001   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHUsername
	I1101 00:36:14.853190   60145 main.go:141] libmachine: Using SSH client type: native
	I1101 00:36:14.853700   60145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.122 22 <nil> <nil>}
	I1101 00:36:14.853722   60145 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-503881 && echo "embed-certs-503881" | sudo tee /etc/hostname
	I1101 00:36:15.014577   60145 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-503881
	
	I1101 00:36:15.014609   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHHostname
	I1101 00:36:15.018324   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:15.018838   60145 main.go:141] libmachine: (embed-certs-503881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:a3:e2", ip: ""} in network mk-embed-certs-503881: {Iface:virbr4 ExpiryTime:2023-11-01 01:33:15 +0000 UTC Type:0 Mac:52:54:00:1e:a3:e2 Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:embed-certs-503881 Clientid:01:52:54:00:1e:a3:e2}
	I1101 00:36:15.018863   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined IP address 192.168.61.122 and MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:15.019218   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHPort
	I1101 00:36:15.019455   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHKeyPath
	I1101 00:36:15.019606   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHKeyPath
	I1101 00:36:15.019758   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHUsername
	I1101 00:36:15.019954   60145 main.go:141] libmachine: Using SSH client type: native
	I1101 00:36:15.020385   60145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.122 22 <nil> <nil>}
	I1101 00:36:15.020410   60145 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-503881' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-503881/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-503881' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 00:36:15.189416   60145 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 00:36:15.189504   60145 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17486-7251/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-7251/.minikube}
	I1101 00:36:15.189557   60145 buildroot.go:174] setting up certificates
	I1101 00:36:15.189584   60145 provision.go:83] configureAuth start
	I1101 00:36:15.189603   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetMachineName
	I1101 00:36:15.189878   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetIP
	I1101 00:36:15.193129   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:15.193569   60145 main.go:141] libmachine: (embed-certs-503881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:a3:e2", ip: ""} in network mk-embed-certs-503881: {Iface:virbr4 ExpiryTime:2023-11-01 01:33:15 +0000 UTC Type:0 Mac:52:54:00:1e:a3:e2 Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:embed-certs-503881 Clientid:01:52:54:00:1e:a3:e2}
	I1101 00:36:15.193611   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined IP address 192.168.61.122 and MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:15.193925   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHHostname
	I1101 00:36:15.196883   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:15.197304   60145 main.go:141] libmachine: (embed-certs-503881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:a3:e2", ip: ""} in network mk-embed-certs-503881: {Iface:virbr4 ExpiryTime:2023-11-01 01:33:15 +0000 UTC Type:0 Mac:52:54:00:1e:a3:e2 Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:embed-certs-503881 Clientid:01:52:54:00:1e:a3:e2}
	I1101 00:36:15.197341   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined IP address 192.168.61.122 and MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:15.197504   60145 provision.go:138] copyHostCerts
	I1101 00:36:15.197637   60145 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem, removing ...
	I1101 00:36:15.197672   60145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem
	I1101 00:36:15.197759   60145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/key.pem (1675 bytes)
	I1101 00:36:15.197957   60145 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem, removing ...
	I1101 00:36:15.197972   60145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem
	I1101 00:36:15.198015   60145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/ca.pem (1082 bytes)
	I1101 00:36:15.198091   60145 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem, removing ...
	I1101 00:36:15.198101   60145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem
	I1101 00:36:15.198130   60145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-7251/.minikube/cert.pem (1123 bytes)
	I1101 00:36:15.198196   60145 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem org=jenkins.embed-certs-503881 san=[192.168.61.122 192.168.61.122 localhost 127.0.0.1 minikube embed-certs-503881]
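
Because the VM was re-provisioned, configureAuth regenerates the Docker server certificate so the machine's IP and hostname appear in the subject alternative names; the `san=[...]` list above is what ends up in the cert. A compact sketch of issuing such a SAN-bearing server certificate with Go's crypto/x509 (the throwaway in-memory CA and validity periods are assumptions; minikube signs with the ca.pem/ca-key.pem pair named above):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Assumption: a throwaway CA; minikube loads its CA cert and key from disk.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Server certificate carrying the SAN list seen in the log above.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-503881"}},
		DNSNames:     []string{"localhost", "minikube", "embed-certs-503881"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.61.122"), net.ParseIP("127.0.0.1")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
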
	I1101 00:36:15.549020   60145 provision.go:172] copyRemoteCerts
	I1101 00:36:15.549133   60145 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 00:36:15.549176   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHHostname
	I1101 00:36:15.552608   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:15.553116   60145 main.go:141] libmachine: (embed-certs-503881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:a3:e2", ip: ""} in network mk-embed-certs-503881: {Iface:virbr4 ExpiryTime:2023-11-01 01:33:15 +0000 UTC Type:0 Mac:52:54:00:1e:a3:e2 Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:embed-certs-503881 Clientid:01:52:54:00:1e:a3:e2}
	I1101 00:36:15.553152   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined IP address 192.168.61.122 and MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:15.553388   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHPort
	I1101 00:36:15.553671   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHKeyPath
	I1101 00:36:15.553847   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHUsername
	I1101 00:36:15.554029   60145 sshutil.go:53] new ssh client: &{IP:192.168.61.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/embed-certs-503881/id_rsa Username:docker}
	I1101 00:36:15.657942   60145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 00:36:15.689570   60145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1101 00:36:15.721916   60145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 00:36:15.759998   60145 provision.go:86] duration metric: configureAuth took 570.396184ms
	I1101 00:36:15.760041   60145 buildroot.go:189] setting minikube options for container-runtime
	I1101 00:36:15.760334   60145 config.go:182] Loaded profile config "embed-certs-503881": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1101 00:36:15.760383   60145 main.go:141] libmachine: (embed-certs-503881) Calling .DriverName
	I1101 00:36:15.760737   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHHostname
	I1101 00:36:15.764209   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:15.764662   60145 main.go:141] libmachine: (embed-certs-503881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:a3:e2", ip: ""} in network mk-embed-certs-503881: {Iface:virbr4 ExpiryTime:2023-11-01 01:33:15 +0000 UTC Type:0 Mac:52:54:00:1e:a3:e2 Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:embed-certs-503881 Clientid:01:52:54:00:1e:a3:e2}
	I1101 00:36:15.764704   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined IP address 192.168.61.122 and MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:15.764922   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHPort
	I1101 00:36:15.765125   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHKeyPath
	I1101 00:36:15.765325   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHKeyPath
	I1101 00:36:15.765494   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHUsername
	I1101 00:36:15.765655   60145 main.go:141] libmachine: Using SSH client type: native
	I1101 00:36:15.766118   60145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.122 22 <nil> <nil>}
	I1101 00:36:15.766146   60145 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1101 00:36:15.909477   60145 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1101 00:36:15.909504   60145 buildroot.go:70] root file system type: tmpfs
	I1101 00:36:15.909651   60145 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1101 00:36:15.909678   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHHostname
	I1101 00:36:15.913318   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:15.913810   60145 main.go:141] libmachine: (embed-certs-503881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:a3:e2", ip: ""} in network mk-embed-certs-503881: {Iface:virbr4 ExpiryTime:2023-11-01 01:33:15 +0000 UTC Type:0 Mac:52:54:00:1e:a3:e2 Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:embed-certs-503881 Clientid:01:52:54:00:1e:a3:e2}
	I1101 00:36:15.914105   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHPort
	I1101 00:36:15.914361   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined IP address 192.168.61.122 and MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:15.914406   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHKeyPath
	I1101 00:36:15.914610   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHKeyPath
	I1101 00:36:15.914777   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHUsername
	I1101 00:36:15.914978   60145 main.go:141] libmachine: Using SSH client type: native
	I1101 00:36:15.915495   60145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.122 22 <nil> <nil>}
	I1101 00:36:15.915656   60145 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1101 00:36:16.073909   60145 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1101 00:36:16.074007   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHHostname
	I1101 00:36:16.077434   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:16.077913   60145 main.go:141] libmachine: (embed-certs-503881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:a3:e2", ip: ""} in network mk-embed-certs-503881: {Iface:virbr4 ExpiryTime:2023-11-01 01:33:15 +0000 UTC Type:0 Mac:52:54:00:1e:a3:e2 Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:embed-certs-503881 Clientid:01:52:54:00:1e:a3:e2}
	I1101 00:36:16.077942   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined IP address 192.168.61.122 and MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:16.078170   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHPort
	I1101 00:36:16.078384   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHKeyPath
	I1101 00:36:16.079019   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHKeyPath
	I1101 00:36:16.079217   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHUsername
	I1101 00:36:16.079392   60145 main.go:141] libmachine: Using SSH client type: native
	I1101 00:36:16.079840   60145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.122 22 <nil> <nil>}
	I1101 00:36:16.079873   60145 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1101 00:36:17.539181   60145 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1101 00:36:17.539215   60145 machine.go:91] provisioned docker machine in 2.690843783s
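
The `diff ... || { mv ...; systemctl ... }` one-liner above makes the unit install idempotent: docker is reloaded, enabled, and restarted only when the rendered unit differs from what is on disk (here diff fails because no docker.service exists yet, so the new file is simply moved into place). A local Go sketch of the same compare-then-swap pattern (paths and the example unit body are placeholders; minikube runs the whole sequence as one shell command over SSH):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged writes the rendered unit only when it differs from the
// current one, then reloads systemd, avoiding a needless service restart.
func installIfChanged(path string, rendered []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // unchanged: nothing to do
	}
	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	return exec.Command("systemctl", "daemon-reload").Run()
}

func main() {
	unit := []byte("[Unit]\nDescription=example\n") // placeholder unit body
	if err := installIfChanged("/tmp/docker.service", unit); err != nil {
		fmt.Println(err)
	}
}
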
	I1101 00:36:17.539251   60145 start.go:300] post-start starting for "embed-certs-503881" (driver="kvm2")
	I1101 00:36:17.539264   60145 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 00:36:17.539297   60145 main.go:141] libmachine: (embed-certs-503881) Calling .DriverName
	I1101 00:36:17.539796   60145 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 00:36:17.539859   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHHostname
	I1101 00:36:17.545625   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:17.546057   60145 main.go:141] libmachine: (embed-certs-503881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:a3:e2", ip: ""} in network mk-embed-certs-503881: {Iface:virbr4 ExpiryTime:2023-11-01 01:33:15 +0000 UTC Type:0 Mac:52:54:00:1e:a3:e2 Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:embed-certs-503881 Clientid:01:52:54:00:1e:a3:e2}
	I1101 00:36:17.546116   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined IP address 192.168.61.122 and MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:17.546250   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHPort
	I1101 00:36:17.546448   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHKeyPath
	I1101 00:36:17.546667   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHUsername
	I1101 00:36:17.546821   60145 sshutil.go:53] new ssh client: &{IP:192.168.61.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/embed-certs-503881/id_rsa Username:docker}
	I1101 00:36:17.650670   60145 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 00:36:17.655319   60145 info.go:137] Remote host: Buildroot 2021.02.12
	I1101 00:36:17.655346   60145 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7251/.minikube/addons for local assets ...
	I1101 00:36:17.655409   60145 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-7251/.minikube/files for local assets ...
	I1101 00:36:17.655501   60145 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem -> 144632.pem in /etc/ssl/certs
	I1101 00:36:17.655620   60145 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 00:36:17.667330   60145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem --> /etc/ssl/certs/144632.pem (1708 bytes)
	I1101 00:36:17.712374   60145 start.go:303] post-start completed in 173.099113ms
	I1101 00:36:17.712402   60145 fix.go:56] fixHost completed within 21.432956424s
	I1101 00:36:17.712427   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHHostname
	I1101 00:36:17.716026   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:17.716547   60145 main.go:141] libmachine: (embed-certs-503881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:a3:e2", ip: ""} in network mk-embed-certs-503881: {Iface:virbr4 ExpiryTime:2023-11-01 01:33:15 +0000 UTC Type:0 Mac:52:54:00:1e:a3:e2 Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:embed-certs-503881 Clientid:01:52:54:00:1e:a3:e2}
	I1101 00:36:17.716599   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined IP address 192.168.61.122 and MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:17.716953   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHPort
	I1101 00:36:17.717192   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHKeyPath
	I1101 00:36:17.717390   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHKeyPath
	I1101 00:36:17.717565   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHUsername
	I1101 00:36:17.717735   60145 main.go:141] libmachine: Using SSH client type: native
	I1101 00:36:17.718294   60145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.122 22 <nil> <nil>}
	I1101 00:36:17.718315   60145 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1101 00:36:17.852205   60145 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698798977.832175828
	
	I1101 00:36:17.852234   60145 fix.go:206] guest clock: 1698798977.832175828
	I1101 00:36:17.852244   60145 fix.go:219] Guest: 2023-11-01 00:36:17.832175828 +0000 UTC Remote: 2023-11-01 00:36:17.712406087 +0000 UTC m=+68.525506044 (delta=119.769741ms)
	I1101 00:36:17.852287   60145 fix.go:190] guest clock delta is within tolerance: 119.769741ms
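
fix.go reads `date +%s.%N` from the guest, compares it against the host clock, and accepts the machine when the drift is within tolerance; the ~120ms delta above passes. A small sketch of that comparison (the one-second tolerance is an assumption for illustration; float parsing keeps roughly millisecond precision, which is enough here):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// guestDelta parses `date +%s.%N` output and returns the drift from local time.
// Note: float64 loses sub-microsecond precision at epoch scale; fine for this check.
func guestDelta(out string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Since(guest), nil
}

func main() {
	d, err := guestDelta("1698798977.832175828") // guest clock reading from the log
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed tolerance, for illustration
	fmt.Printf("delta=%v within tolerance: %v\n", d, math.Abs(float64(d)) < float64(tolerance))
}
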
	I1101 00:36:17.852297   60145 start.go:83] releasing machines lock for "embed-certs-503881", held for 21.572887243s
	I1101 00:36:17.852323   60145 main.go:141] libmachine: (embed-certs-503881) Calling .DriverName
	I1101 00:36:17.852633   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetIP
	I1101 00:36:17.855774   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:17.856184   60145 main.go:141] libmachine: (embed-certs-503881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:a3:e2", ip: ""} in network mk-embed-certs-503881: {Iface:virbr4 ExpiryTime:2023-11-01 01:33:15 +0000 UTC Type:0 Mac:52:54:00:1e:a3:e2 Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:embed-certs-503881 Clientid:01:52:54:00:1e:a3:e2}
	I1101 00:36:17.856214   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined IP address 192.168.61.122 and MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:17.856417   60145 main.go:141] libmachine: (embed-certs-503881) Calling .DriverName
	I1101 00:36:17.856983   60145 main.go:141] libmachine: (embed-certs-503881) Calling .DriverName
	I1101 00:36:17.857180   60145 main.go:141] libmachine: (embed-certs-503881) Calling .DriverName
	I1101 00:36:17.857288   60145 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 00:36:17.857355   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHHostname
	I1101 00:36:17.857515   60145 ssh_runner.go:195] Run: cat /version.json
	I1101 00:36:17.857542   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHHostname
	I1101 00:36:17.860551   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:17.860710   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:17.861171   60145 main.go:141] libmachine: (embed-certs-503881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:a3:e2", ip: ""} in network mk-embed-certs-503881: {Iface:virbr4 ExpiryTime:2023-11-01 01:33:15 +0000 UTC Type:0 Mac:52:54:00:1e:a3:e2 Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:embed-certs-503881 Clientid:01:52:54:00:1e:a3:e2}
	I1101 00:36:17.861252   60145 main.go:141] libmachine: (embed-certs-503881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:a3:e2", ip: ""} in network mk-embed-certs-503881: {Iface:virbr4 ExpiryTime:2023-11-01 01:33:15 +0000 UTC Type:0 Mac:52:54:00:1e:a3:e2 Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:embed-certs-503881 Clientid:01:52:54:00:1e:a3:e2}
	I1101 00:36:17.861282   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined IP address 192.168.61.122 and MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:17.861320   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined IP address 192.168.61.122 and MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:17.861538   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHPort
	I1101 00:36:17.861641   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHPort
	I1101 00:36:17.861739   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHKeyPath
	I1101 00:36:17.861826   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHKeyPath
	I1101 00:36:17.861898   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHUsername
	I1101 00:36:17.862099   60145 sshutil.go:53] new ssh client: &{IP:192.168.61.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/embed-certs-503881/id_rsa Username:docker}
	I1101 00:36:17.862108   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetSSHUsername
	I1101 00:36:17.862283   60145 sshutil.go:53] new ssh client: &{IP:192.168.61.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/embed-certs-503881/id_rsa Username:docker}
	I1101 00:36:17.984965   60145 ssh_runner.go:195] Run: systemctl --version
	I1101 00:36:17.991338   60145 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 00:36:17.998703   60145 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 00:36:17.998769   60145 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 00:36:18.013824   60145 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 00:36:18.013856   60145 start.go:472] detecting cgroup driver to use...
	I1101 00:36:18.014009   60145 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:36:18.032887   60145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1101 00:36:18.042087   60145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1101 00:36:18.052054   60145 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1101 00:36:18.052130   60145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1101 00:36:18.064032   60145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1101 00:36:18.073855   60145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1101 00:36:18.085138   60145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1101 00:36:18.094562   60145 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 00:36:18.103816   60145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1101 00:36:18.116350   60145 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 00:36:18.126911   60145 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 00:36:18.138597   60145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:36:18.273982   60145 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1101 00:36:18.298132   60145 start.go:472] detecting cgroup driver to use...
	I1101 00:36:18.298224   60145 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1101 00:36:18.322364   60145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:36:18.345674   60145 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 00:36:18.373545   60145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:36:18.389685   60145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 00:36:18.406415   60145 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1101 00:36:18.444202   60145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 00:36:18.460706   60145 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:36:18.482272   60145 ssh_runner.go:195] Run: which cri-dockerd
	I1101 00:36:18.487424   60145 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1101 00:36:18.499597   60145 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1101 00:36:18.521591   60145 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1101 00:36:18.663200   60145 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1101 00:36:18.848170   60145 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
	I1101 00:36:18.848402   60145 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1101 00:36:18.873754   60145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:36:19.018264   60145 ssh_runner.go:195] Run: sudo systemctl restart docker
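
To pin the cgroup driver, docker.go pushes a 130-byte daemon.json before restarting the daemon. The log reports only the file's size, not its contents, so the body below is an assumed illustration of a typical cgroupfs configuration, wrapped in a Go sketch for consistency with the other examples:

package main

import "os"

func main() {
	// Assumed example contents; the actual 130-byte file is not printed in this log.
	daemonJSON := `{
"exec-opts": ["native.cgroupdriver=cgroupfs"],
"log-driver": "json-file",
"log-opts": {"max-size": "100m"},
"storage-driver": "overlay2"
}`
	if err := os.WriteFile("/tmp/daemon.json", []byte(daemonJSON), 0o644); err != nil {
		panic(err)
	}
}
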
	I1101 00:36:14.635240   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:36:15.135538   60028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:36:15.206058   60028 api_server.go:72] duration metric: took 2.605356977s to wait for apiserver process to appear ...
	I1101 00:36:15.206085   60028 api_server.go:88] waiting for apiserver healthz status ...
	I1101 00:36:15.206104   60028 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8444/healthz ...
	I1101 00:36:15.206786   60028 api_server.go:269] stopped: https://192.168.72.142:8444/healthz: Get "https://192.168.72.142:8444/healthz": dial tcp 192.168.72.142:8444: connect: connection refused
	I1101 00:36:15.206827   60028 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8444/healthz ...
	I1101 00:36:15.207558   60028 api_server.go:269] stopped: https://192.168.72.142:8444/healthz: Get "https://192.168.72.142:8444/healthz": dial tcp 192.168.72.142:8444: connect: connection refused
	I1101 00:36:15.708005   60028 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8444/healthz ...
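
While the restarted apiserver comes up, minikube probes https://192.168.72.142:8444/healthz directly: `connection refused` means nothing is listening yet, 403 means the server is up but RBAC bootstrap has not yet authorized anonymous health checks, and 500 lists the poststarthook checks still failing (all three appear in this log). A minimal version of that probe in Go (skipping TLS verification is an assumption for brevity; minikube pins the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption for brevity: skip verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.72.142:8444/healthz")
		if err != nil {
			// e.g. "connect: connection refused" while the apiserver restarts
			fmt.Println("stopped:", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("healthz ok")
			return
		}
		// 403: up, RBAC bootstrap incomplete; 500: poststarthooks still failing.
		fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		time.Sleep(500 * time.Millisecond)
	}
}
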
	I1101 00:36:16.186962   59728 pod_ready.go:102] pod "coredns-5644d7b6d9-67f7c" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:18.187172   59728 pod_ready.go:102] pod "coredns-5644d7b6d9-67f7c" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:20.631217   60145 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.612908026s)
	I1101 00:36:20.631296   60145 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1101 00:36:20.773885   60145 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1101 00:36:20.906307   60145 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1101 00:36:21.025768   60145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:36:21.146455   60145 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1101 00:36:21.169474   60145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:36:21.330703   60145 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1101 00:36:21.432204   60145 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1101 00:36:21.432299   60145 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1101 00:36:21.439791   60145 start.go:540] Will wait 60s for crictl version
	I1101 00:36:21.439855   60145 ssh_runner.go:195] Run: which crictl
	I1101 00:36:21.443899   60145 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 00:36:21.505559   60145 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1101 00:36:21.505629   60145 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 00:36:21.538626   60145 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 00:36:19.945756   60028 api_server.go:279] https://192.168.72.142:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 00:36:19.945785   60028 api_server.go:103] status: https://192.168.72.142:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 00:36:19.945796   60028 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8444/healthz ...
	I1101 00:36:19.990280   60028 api_server.go:279] https://192.168.72.142:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 00:36:19.990316   60028 api_server.go:103] status: https://192.168.72.142:8444/healthz returned error 403: (body identical to the response above)
	I1101 00:36:20.208676   60028 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8444/healthz ...
	I1101 00:36:20.216463   60028 api_server.go:279] https://192.168.72.142:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 00:36:20.216496   60028 api_server.go:103] status: https://192.168.72.142:8444/healthz returned error 500: (body identical to the 500 response above)
	I1101 00:36:20.707968   60028 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8444/healthz ...
	I1101 00:36:20.715684   60028 api_server.go:279] https://192.168.72.142:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 00:36:20.715722   60028 api_server.go:103] status: https://192.168.72.142:8444/healthz returned error 500: (body identical to the 500 response above)
	I1101 00:36:21.207735   60028 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8444/healthz ...
	I1101 00:36:21.215038   60028 api_server.go:279] https://192.168.72.142:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 00:36:21.215075   60028 api_server.go:103] status: https://192.168.72.142:8444/healthz returned error 500: (body identical to the 500 response above)
	I1101 00:36:21.708655   60028 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8444/healthz ...
	I1101 00:36:21.716561   60028 api_server.go:279] https://192.168.72.142:8444/healthz returned 200:
	ok
	I1101 00:36:21.733758   60028 api_server.go:141] control plane version: v1.28.3
	I1101 00:36:21.733800   60028 api_server.go:131] duration metric: took 6.527706397s to wait for apiserver health ...
	I1101 00:36:21.733814   60028 cni.go:84] Creating CNI manager for ""
	I1101 00:36:21.733835   60028 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1101 00:36:21.736055   60028 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 00:36:19.361467   59907 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25jvq" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:21.361793   59907 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25jvq" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:21.737850   60028 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 00:36:21.752677   60028 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
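	(Editor's note: the 457-byte conflist copied above is not shown in the log. For illustration only, a representative bridge CNI config of the kind minikube generates — the exact contents of the real `1-k8s.conflist` may differ; the pod subnet below matches the 10.244.0.0/16 CIDR used elsewhere in this run:)

```go
package main

import (
	"fmt"
	"os"
)

// A representative bridge CNI config; minikube's actual 1-k8s.conflist may
// differ in details. Written to the same path as in the log above.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println(err)
	}
}
```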
	I1101 00:36:21.789120   60028 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 00:36:21.802192   60028 system_pods.go:59] 8 kube-system pods found
	I1101 00:36:21.802233   60028 system_pods.go:61] "coredns-5dd5756b68-fgw5x" [5a3fb229-476c-4c44-b3f2-498b9571b17d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 00:36:21.802246   60028 system_pods.go:61] "etcd-default-k8s-diff-port-195256" [f32598e1-0960-49fa-81d8-2e0951e51f80] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 00:36:21.802263   60028 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-195256" [cba81316-0525-4d86-adb5-deeeb8a3ae53] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 00:36:21.802317   60028 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-195256" [5bc2fff3-07cd-4af8-a48c-0efd1826ceac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 00:36:21.802333   60028 system_pods.go:61] "kube-proxy-4g4mh" [f9239173-3c01-4e2e-a680-edca882276fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 00:36:21.802361   60028 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-195256" [55d1aa52-18d3-432b-8047-707b63d7a600] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 00:36:21.802377   60028 system_pods.go:61] "metrics-server-57f55c9bc5-t5dsp" [182f5f28-3364-40de-a7db-8d7d41abc622] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 00:36:21.802386   60028 system_pods.go:61] "storage-provisioner" [462aed18-befb-49f0-99d6-205fa7bd888d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 00:36:21.802405   60028 system_pods.go:74] duration metric: took 13.258176ms to wait for pod list to return data ...
	I1101 00:36:21.802418   60028 node_conditions.go:102] verifying NodePressure condition ...
	I1101 00:36:21.806250   60028 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:36:21.806286   60028 node_conditions.go:123] node cpu capacity is 2
	I1101 00:36:21.806299   60028 node_conditions.go:105] duration metric: took 3.871321ms to run NodePressure ...
	I1101 00:36:21.806326   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:36:22.390431   60028 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1101 00:36:22.397579   60028 kubeadm.go:787] kubelet initialised
	I1101 00:36:22.397608   60028 kubeadm.go:788] duration metric: took 7.153685ms waiting for restarted kubelet to initialise ...
	I1101 00:36:22.397617   60028 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I1101 00:36:22.403230   60028 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-fgw5x" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:22.410711   60028 pod_ready.go:97] node "default-k8s-diff-port-195256" hosting pod "coredns-5dd5756b68-fgw5x" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-195256" has status "Ready":"False"
	I1101 00:36:22.410738   60028 pod_ready.go:81] duration metric: took 7.488812ms waiting for pod "coredns-5dd5756b68-fgw5x" in "kube-system" namespace to be "Ready" ...
	E1101 00:36:22.410749   60028 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-195256" hosting pod "coredns-5dd5756b68-fgw5x" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-195256" has status "Ready":"False"
	I1101 00:36:22.410759   60028 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-195256" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:22.421124   60028 pod_ready.go:97] node "default-k8s-diff-port-195256" hosting pod "etcd-default-k8s-diff-port-195256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-195256" has status "Ready":"False"
	I1101 00:36:22.421156   60028 pod_ready.go:81] duration metric: took 10.388245ms waiting for pod "etcd-default-k8s-diff-port-195256" in "kube-system" namespace to be "Ready" ...
	E1101 00:36:22.421175   60028 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-195256" hosting pod "etcd-default-k8s-diff-port-195256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-195256" has status "Ready":"False"
	I1101 00:36:22.421188   60028 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-195256" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:22.433767   60028 pod_ready.go:97] node "default-k8s-diff-port-195256" hosting pod "kube-apiserver-default-k8s-diff-port-195256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-195256" has status "Ready":"False"
	I1101 00:36:22.433798   60028 pod_ready.go:81] duration metric: took 12.597367ms waiting for pod "kube-apiserver-default-k8s-diff-port-195256" in "kube-system" namespace to be "Ready" ...
	E1101 00:36:22.433811   60028 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-195256" hosting pod "kube-apiserver-default-k8s-diff-port-195256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-195256" has status "Ready":"False"
	I1101 00:36:22.433821   60028 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-195256" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:22.451508   60028 pod_ready.go:97] node "default-k8s-diff-port-195256" hosting pod "kube-controller-manager-default-k8s-diff-port-195256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-195256" has status "Ready":"False"
	I1101 00:36:22.451543   60028 pod_ready.go:81] duration metric: took 17.70923ms waiting for pod "kube-controller-manager-default-k8s-diff-port-195256" in "kube-system" namespace to be "Ready" ...
	E1101 00:36:22.451553   60028 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-195256" hosting pod "kube-controller-manager-default-k8s-diff-port-195256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-195256" has status "Ready":"False"
	I1101 00:36:22.451561   60028 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4g4mh" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:22.794218   60028 pod_ready.go:97] node "default-k8s-diff-port-195256" hosting pod "kube-proxy-4g4mh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-195256" has status "Ready":"False"
	I1101 00:36:22.794254   60028 pod_ready.go:81] duration metric: took 342.685288ms waiting for pod "kube-proxy-4g4mh" in "kube-system" namespace to be "Ready" ...
	E1101 00:36:22.794268   60028 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-195256" hosting pod "kube-proxy-4g4mh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-195256" has status "Ready":"False"
	I1101 00:36:22.794278   60028 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-195256" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:23.194757   60028 pod_ready.go:97] node "default-k8s-diff-port-195256" hosting pod "kube-scheduler-default-k8s-diff-port-195256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-195256" has status "Ready":"False"
	I1101 00:36:23.194797   60028 pod_ready.go:81] duration metric: took 400.5067ms waiting for pod "kube-scheduler-default-k8s-diff-port-195256" in "kube-system" namespace to be "Ready" ...
	E1101 00:36:23.194812   60028 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-195256" hosting pod "kube-scheduler-default-k8s-diff-port-195256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-195256" has status "Ready":"False"
	I1101 00:36:23.194822   60028 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-t5dsp" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:23.595116   60028 pod_ready.go:97] node "default-k8s-diff-port-195256" hosting pod "metrics-server-57f55c9bc5-t5dsp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-195256" has status "Ready":"False"
	I1101 00:36:23.595144   60028 pod_ready.go:81] duration metric: took 400.313313ms waiting for pod "metrics-server-57f55c9bc5-t5dsp" in "kube-system" namespace to be "Ready" ...
	E1101 00:36:23.595154   60028 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-195256" hosting pod "metrics-server-57f55c9bc5-t5dsp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-195256" has status "Ready":"False"
	I1101 00:36:23.595175   60028 pod_ready.go:38] duration metric: took 1.197534602s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:36:23.595191   60028 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 00:36:23.609784   60028 ops.go:34] apiserver oom_adj: -16
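	(Editor's note: the oom_adj check above confirms the apiserver runs with a strongly negative OOM score, so the kernel kills it last under memory pressure; -16 in the legacy oom_adj scale corresponds, after the kernel's internal scaling, to the strongly negative oom_score_adj kubelet assigns to critical static pods. A minimal sketch of the same read, assuming the pid is already known — the log obtains it with pgrep:)

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// readOOMAdj returns the (legacy) oom_adj value for a pid, as read in the
// log via `cat /proc/$(pgrep kube-apiserver)/oom_adj`.
func readOOMAdj(pid int) (string, error) {
	b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	// Demo against our own pid; substitute the apiserver's pid in practice.
	v, err := readOOMAdj(os.Getpid())
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("oom_adj:", v)
}
```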
	I1101 00:36:23.609816   60028 kubeadm.go:640] restartCluster took 23.05402896s
	I1101 00:36:23.609824   60028 kubeadm.go:406] StartCluster complete in 23.082921287s
	I1101 00:36:23.609863   60028 settings.go:142] acquiring lock: {Name:mk57c659cffa0c6a1b184e5906c662f85ff8a099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:36:23.609946   60028 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-7251/kubeconfig
	I1101 00:36:23.611766   60028 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7251/kubeconfig: {Name:mk525de6243b20b40961c1a878f4272a26e9a097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:36:23.612033   60028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 00:36:23.612048   60028 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 00:36:23.612149   60028 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-195256"
	I1101 00:36:23.612168   60028 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-195256"
	W1101 00:36:23.612183   60028 addons.go:240] addon storage-provisioner should already be in state true
	I1101 00:36:23.612220   60028 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-195256"
	I1101 00:36:23.612240   60028 host.go:66] Checking if "default-k8s-diff-port-195256" exists ...
	I1101 00:36:23.612244   60028 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-195256"
	I1101 00:36:23.612249   60028 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-195256"
	I1101 00:36:23.612271   60028 config.go:182] Loaded profile config "default-k8s-diff-port-195256": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1101 00:36:23.612275   60028 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-195256"
	I1101 00:36:23.612298   60028 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-195256"
	I1101 00:36:23.612321   60028 addons.go:231] Setting addon dashboard=true in "default-k8s-diff-port-195256"
	W1101 00:36:23.612338   60028 addons.go:240] addon dashboard should already be in state true
	W1101 00:36:23.612265   60028 addons.go:240] addon metrics-server should already be in state true
	I1101 00:36:23.612351   60028 cache.go:107] acquiring lock: {Name:mkc5ed527821f669fe42d90dc96f9db56fa3565a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:36:23.612422   60028 cache.go:115] /home/jenkins/minikube-integration/17486-7251/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I1101 00:36:23.612428   60028 host.go:66] Checking if "default-k8s-diff-port-195256" exists ...
	I1101 00:36:23.612432   60028 host.go:66] Checking if "default-k8s-diff-port-195256" exists ...
	I1101 00:36:23.612432   60028 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17486-7251/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 85.949µs
	I1101 00:36:23.612445   60028 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17486-7251/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I1101 00:36:23.612452   60028 cache.go:87] Successfully saved all images to host disk.
	I1101 00:36:23.612661   60028 config.go:182] Loaded profile config "default-k8s-diff-port-195256": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1101 00:36:23.612719   60028 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:36:23.612756   60028 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:36:23.612843   60028 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:36:23.612853   60028 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:36:23.612866   60028 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:36:23.612877   60028 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:36:23.612885   60028 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:36:23.612897   60028 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:36:23.613139   60028 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:36:23.613164   60028 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:36:23.619843   60028 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-195256" context rescaled to 1 replicas
	I1101 00:36:23.619906   60028 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.142 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1101 00:36:23.623289   60028 out.go:177] * Verifying Kubernetes components...
	I1101 00:36:23.626358   60028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
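	(Editor's note: `systemctl is-active --quiet <unit>` communicates its answer entirely through the exit status — 0 means active — which is why this step produces no output in the log. A local-exec sketch of the same probe; minikube actually runs it over SSH, and the helper name is hypothetical:)

```go
package main

import (
	"fmt"
	"os/exec"
)

// unitActive mirrors `systemctl is-active --quiet <unit>`: systemd signals
// the answer purely via the exit status, so no output parsing is needed.
func unitActive(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	fmt.Println("kubelet active:", unitActive("kubelet"))
}
```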
	I1101 00:36:23.632134   60028 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45479
	I1101 00:36:23.632231   60028 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38961
	I1101 00:36:23.632366   60028 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39609
	I1101 00:36:23.632477   60028 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34791
	I1101 00:36:23.632797   60028 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:36:23.632936   60028 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:36:23.634110   60028 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:36:23.634217   60028 main.go:141] libmachine: Using API Version  1
	I1101 00:36:23.634235   60028 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:36:23.634278   60028 main.go:141] libmachine: Using API Version  1
	I1101 00:36:23.634296   60028 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:36:23.634301   60028 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:36:23.634672   60028 main.go:141] libmachine: Using API Version  1
	I1101 00:36:23.634689   60028 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:36:23.634770   60028 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:36:23.634811   60028 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:36:23.634840   60028 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33565
	I1101 00:36:23.634960   60028 main.go:141] libmachine: Using API Version  1
	I1101 00:36:23.634978   60028 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:36:23.634994   60028 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:36:23.635121   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetState
	I1101 00:36:23.635602   60028 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:36:23.635642   60028 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:36:23.635757   60028 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:36:23.635859   60028 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:36:23.636275   60028 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:36:23.636312   60028 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:36:23.636598   60028 main.go:141] libmachine: Using API Version  1
	I1101 00:36:23.636617   60028 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:36:23.636896   60028 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:36:23.636946   60028 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:36:23.637305   60028 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:36:23.637799   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetState
	I1101 00:36:23.638538   60028 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:36:23.638583   60028 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:36:23.641050   60028 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-195256"
	W1101 00:36:23.641073   60028 addons.go:240] addon default-storageclass should already be in state true
	I1101 00:36:23.641099   60028 host.go:66] Checking if "default-k8s-diff-port-195256" exists ...
	I1101 00:36:23.641542   60028 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:36:23.641585   60028 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:36:23.656826   60028 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42137
	I1101 00:36:23.656882   60028 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41211
	I1101 00:36:23.657453   60028 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:36:23.657491   60028 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:36:23.658016   60028 main.go:141] libmachine: Using API Version  1
	I1101 00:36:23.658043   60028 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:36:23.658189   60028 main.go:141] libmachine: Using API Version  1
	I1101 00:36:23.658200   60028 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:36:23.658587   60028 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:36:23.658678   60028 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41233
	I1101 00:36:23.658969   60028 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:36:23.659035   60028 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:36:23.659242   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetState
	I1101 00:36:23.659431   60028 main.go:141] libmachine: Using API Version  1
	I1101 00:36:23.659449   60028 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:36:23.659860   60028 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:36:23.660286   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetState
	I1101 00:36:23.660774   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetState
	I1101 00:36:23.661114   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .DriverName
	I1101 00:36:23.663481   60028 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1101 00:36:23.665150   60028 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 00:36:23.666605   60028 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 00:36:23.666624   60028 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 00:36:23.666644   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHHostname
	I1101 00:36:23.665120   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .DriverName
	I1101 00:36:23.664579   60028 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40067
	I1101 00:36:23.665643   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .DriverName
	I1101 00:36:23.669218   60028 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 00:36:23.667486   60028 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:36:23.668842   60028 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42059
	I1101 00:36:23.669872   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:36:23.670535   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHPort
	I1101 00:36:23.670837   60028 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 00:36:23.670995   60028 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 00:36:21.569511   60145 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1101 00:36:21.569667   60145 main.go:141] libmachine: (embed-certs-503881) Calling .GetIP
	I1101 00:36:21.573086   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:21.573464   60145 main.go:141] libmachine: (embed-certs-503881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:a3:e2", ip: ""} in network mk-embed-certs-503881: {Iface:virbr4 ExpiryTime:2023-11-01 01:33:15 +0000 UTC Type:0 Mac:52:54:00:1e:a3:e2 Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:embed-certs-503881 Clientid:01:52:54:00:1e:a3:e2}
	I1101 00:36:21.573514   60145 main.go:141] libmachine: (embed-certs-503881) DBG | domain embed-certs-503881 has defined IP address 192.168.61.122 and MAC address 52:54:00:1e:a3:e2 in network mk-embed-certs-503881
	I1101 00:36:21.573689   60145 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1101 00:36:21.578769   60145 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
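	(Editor's note: the bash one-liner above is an idempotent hosts update — drop any existing `host.minikube.internal` line, append the fresh mapping, write to a temp file, then `sudo cp` it into place. The same logic as a Go sketch; the `upsertHostsEntry` helper name is hypothetical:)

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry removes any line ending in "\t<host>" and appends
// "<ip>\t<host>", mirroring the grep -v / echo pipeline in the log above.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
```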
	I1101 00:36:21.592344   60145 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1101 00:36:21.592446   60145 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 00:36:21.617192   60145 docker.go:699] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1101 00:36:21.617220   60145 docker.go:629] Images already preloaded, skipping extraction
	I1101 00:36:21.617280   60145 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 00:36:21.639411   60145 docker.go:699] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1101 00:36:21.639458   60145 cache_images.go:84] Images are preloaded, skipping loading
	I1101 00:36:21.639525   60145 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
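	(Editor's note: `docker info --format {{.CgroupDriver}}` is queried here because the runtime's cgroup driver must agree with the kubelet's `cgroupDriver` setting — cgroupfs in the kubeadm config rendered below — or pods fail to start. A sketch of the same probe; the helper name is hypothetical:)

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dockerCgroupDriver mirrors `docker info --format {{.CgroupDriver}}`.
func dockerCgroupDriver() (string, error) {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	d, err := dockerCgroupDriver()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("cgroup driver:", d) // must match the kubelet's cgroupDriver
}
```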
	I1101 00:36:21.674135   60145 cni.go:84] Creating CNI manager for ""
	I1101 00:36:21.674163   60145 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1101 00:36:21.674182   60145 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 00:36:21.674205   60145 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.122 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-503881 NodeName:embed-certs-503881 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.122"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.122 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 00:36:21.674379   60145 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.122
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-503881"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.122
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.122"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 00:36:21.674467   60145 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=embed-certs-503881 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.122
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-503881 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 00:36:21.674561   60145 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 00:36:21.687211   60145 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 00:36:21.687297   60145 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 00:36:21.697245   60145 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I1101 00:36:21.721600   60145 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 00:36:21.754711   60145 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2107 bytes)
	I1101 00:36:21.774120   60145 ssh_runner.go:195] Run: grep 192.168.61.122	control-plane.minikube.internal$ /etc/hosts
	I1101 00:36:21.778182   60145 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.122	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 00:36:21.790478   60145 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/embed-certs-503881 for IP: 192.168.61.122
	I1101 00:36:21.790537   60145 certs.go:190] acquiring lock for shared ca certs: {Name:mkd78a553474b872bb63abf547b6fa0a317dc3be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:36:21.790709   60145 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-7251/.minikube/ca.key
	I1101 00:36:21.790771   60145 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.key
	I1101 00:36:21.790885   60145 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/embed-certs-503881/client.key
	I1101 00:36:21.790971   60145 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/embed-certs-503881/apiserver.key.a4a52458
	I1101 00:36:21.791013   60145 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/embed-certs-503881/proxy-client.key
	I1101 00:36:21.791167   60145 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463.pem (1338 bytes)
	W1101 00:36:21.791212   60145 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463_empty.pem, impossibly tiny 0 bytes
	I1101 00:36:21.791227   60145 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 00:36:21.791265   60145 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/ca.pem (1082 bytes)
	I1101 00:36:21.791302   60145 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/cert.pem (1123 bytes)
	I1101 00:36:21.791339   60145 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/certs/home/jenkins/minikube-integration/17486-7251/.minikube/certs/key.pem (1675 bytes)
	I1101 00:36:21.791401   60145 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem (1708 bytes)
	I1101 00:36:21.792009   60145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/embed-certs-503881/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 00:36:21.822837   60145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/embed-certs-503881/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 00:36:21.847561   60145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/embed-certs-503881/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 00:36:21.872432   60145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/embed-certs-503881/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 00:36:21.902267   60145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 00:36:21.931148   60145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 00:36:21.955852   60145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 00:36:21.983334   60145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 00:36:22.013099   60145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 00:36:22.040565   60145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/certs/14463.pem --> /usr/share/ca-certificates/14463.pem (1338 bytes)
	I1101 00:36:22.070898   60145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/ssl/certs/144632.pem --> /usr/share/ca-certificates/144632.pem (1708 bytes)
	I1101 00:36:22.099256   60145 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 00:36:22.121308   60145 ssh_runner.go:195] Run: openssl version
	I1101 00:36:22.129042   60145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 00:36:22.141735   60145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:36:22.148384   60145 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:36:22.148456   60145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:36:22.154667   60145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 00:36:22.165430   60145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14463.pem && ln -fs /usr/share/ca-certificates/14463.pem /etc/ssl/certs/14463.pem"
	I1101 00:36:22.177668   60145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14463.pem
	I1101 00:36:22.183745   60145 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 23:48 /usr/share/ca-certificates/14463.pem
	I1101 00:36:22.183813   60145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14463.pem
	I1101 00:36:22.190263   60145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14463.pem /etc/ssl/certs/51391683.0"
	I1101 00:36:22.203938   60145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144632.pem && ln -fs /usr/share/ca-certificates/144632.pem /etc/ssl/certs/144632.pem"
	I1101 00:36:22.218406   60145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144632.pem
	I1101 00:36:22.223262   60145 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 23:48 /usr/share/ca-certificates/144632.pem
	I1101 00:36:22.223356   60145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144632.pem
	I1101 00:36:22.230093   60145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144632.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 00:36:22.241032   60145 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 00:36:22.246335   60145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 00:36:22.253161   60145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 00:36:22.260294   60145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 00:36:22.266565   60145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 00:36:22.273016   60145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 00:36:22.280386   60145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
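	(Editor's note: the series of `openssl x509 -checkend 86400` runs above asks whether each certificate expires within the next 24 hours; exit status 0 means it does not. An equivalent check in Go with crypto/x509 — a sketch, using one of the paths from the log:)

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question `openssl x509 -checkend 86400` answers above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```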
	I1101 00:36:22.287812   60145 kubeadm.go:404] StartCluster: {Name:embed-certs-503881 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-503881 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.122 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:36:22.287935   60145 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1101 00:36:22.313831   60145 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 00:36:22.323850   60145 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1101 00:36:22.323876   60145 kubeadm.go:636] restartCluster start
	I1101 00:36:22.323932   60145 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 00:36:22.336024   60145 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:22.336958   60145 kubeconfig.go:135] verify returned: extract IP: "embed-certs-503881" does not appear in /home/jenkins/minikube-integration/17486-7251/kubeconfig
	I1101 00:36:22.337391   60145 kubeconfig.go:146] "embed-certs-503881" context is missing from /home/jenkins/minikube-integration/17486-7251/kubeconfig - will repair!
	I1101 00:36:22.338039   60145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-7251/kubeconfig: {Name:mk525de6243b20b40961c1a878f4272a26e9a097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:36:22.340124   60145 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 00:36:22.352375   60145 api_server.go:166] Checking apiserver status ...
	I1101 00:36:22.352437   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:22.369324   60145 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:22.369355   60145 api_server.go:166] Checking apiserver status ...
	I1101 00:36:22.369435   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:22.384841   60145 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:22.885564   60145 api_server.go:166] Checking apiserver status ...
	I1101 00:36:22.885671   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:22.902032   60145 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:23.385796   60145 api_server.go:166] Checking apiserver status ...
	I1101 00:36:23.385880   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:23.401476   60145 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:23.885001   60145 api_server.go:166] Checking apiserver status ...
	I1101 00:36:23.885104   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:23.902025   60145 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:23.673388   60028 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 00:36:23.673403   60028 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 00:36:23.670999   60028 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 00:36:23.673419   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHHostname
	I1101 00:36:23.673438   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHHostname
	I1101 00:36:23.671076   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f6:1c", ip: ""} in network mk-default-k8s-diff-port-195256: {Iface:virbr2 ExpiryTime:2023-11-01 01:35:47 +0000 UTC Type:0 Mac:52:54:00:ff:f6:1c Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:default-k8s-diff-port-195256 Clientid:01:52:54:00:ff:f6:1c}
	I1101 00:36:23.671167   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHKeyPath
	I1101 00:36:23.671466   60028 main.go:141] libmachine: Using API Version  1
	I1101 00:36:23.671510   60028 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:36:23.673489   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined IP address 192.168.72.142 and MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:36:23.673514   60028 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:36:23.673983   60028 main.go:141] libmachine: Using API Version  1
	I1101 00:36:23.673998   60028 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:36:23.674044   60028 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:36:23.674262   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .DriverName
	I1101 00:36:23.674398   60028 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:36:23.674537   60028 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 00:36:23.674558   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHHostname
	I1101 00:36:23.674970   60028 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:36:23.675011   60028 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:36:23.675627   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHUsername
	I1101 00:36:23.676088   60028 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/default-k8s-diff-port-195256/id_rsa Username:docker}
	I1101 00:36:23.677602   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:36:23.677916   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f6:1c", ip: ""} in network mk-default-k8s-diff-port-195256: {Iface:virbr2 ExpiryTime:2023-11-01 01:35:47 +0000 UTC Type:0 Mac:52:54:00:ff:f6:1c Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:default-k8s-diff-port-195256 Clientid:01:52:54:00:ff:f6:1c}
	I1101 00:36:23.677938   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined IP address 192.168.72.142 and MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:36:23.678114   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHPort
	I1101 00:36:23.678251   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHKeyPath
	I1101 00:36:23.678355   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHUsername
	I1101 00:36:23.678438   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:36:23.678467   60028 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/default-k8s-diff-port-195256/id_rsa Username:docker}
	I1101 00:36:23.678892   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f6:1c", ip: ""} in network mk-default-k8s-diff-port-195256: {Iface:virbr2 ExpiryTime:2023-11-01 01:35:47 +0000 UTC Type:0 Mac:52:54:00:ff:f6:1c Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:default-k8s-diff-port-195256 Clientid:01:52:54:00:ff:f6:1c}
	I1101 00:36:23.678930   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined IP address 192.168.72.142 and MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:36:23.679096   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHPort
	I1101 00:36:23.679270   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHKeyPath
	I1101 00:36:23.679443   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHUsername
	I1101 00:36:23.679598   60028 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/default-k8s-diff-port-195256/id_rsa Username:docker}
	I1101 00:36:23.684442   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:36:23.684868   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f6:1c", ip: ""} in network mk-default-k8s-diff-port-195256: {Iface:virbr2 ExpiryTime:2023-11-01 01:35:47 +0000 UTC Type:0 Mac:52:54:00:ff:f6:1c Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:default-k8s-diff-port-195256 Clientid:01:52:54:00:ff:f6:1c}
	I1101 00:36:23.684909   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined IP address 192.168.72.142 and MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:36:23.685124   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHPort
	I1101 00:36:23.685436   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHKeyPath
	I1101 00:36:23.685600   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHUsername
	I1101 00:36:23.685743   60028 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/default-k8s-diff-port-195256/id_rsa Username:docker}
	I1101 00:36:23.716995   60028 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36289
	I1101 00:36:23.717423   60028 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:36:23.717917   60028 main.go:141] libmachine: Using API Version  1
	I1101 00:36:23.717948   60028 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:36:23.718392   60028 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:36:23.718602   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetState
	I1101 00:36:23.720589   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .DriverName
	I1101 00:36:23.720943   60028 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 00:36:23.720962   60028 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 00:36:23.720982   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHHostname
	I1101 00:36:23.724187   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:36:23.724624   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f6:1c", ip: ""} in network mk-default-k8s-diff-port-195256: {Iface:virbr2 ExpiryTime:2023-11-01 01:35:47 +0000 UTC Type:0 Mac:52:54:00:ff:f6:1c Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:default-k8s-diff-port-195256 Clientid:01:52:54:00:ff:f6:1c}
	I1101 00:36:23.724650   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | domain default-k8s-diff-port-195256 has defined IP address 192.168.72.142 and MAC address 52:54:00:ff:f6:1c in network mk-default-k8s-diff-port-195256
	I1101 00:36:23.724846   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHPort
	I1101 00:36:23.725045   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHKeyPath
	I1101 00:36:23.725227   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .GetSSHUsername
	I1101 00:36:23.725356   60028 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/default-k8s-diff-port-195256/id_rsa Username:docker}
	I1101 00:36:23.838352   60028 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 00:36:23.893933   60028 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 00:36:23.910137   60028 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 00:36:23.910158   60028 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 00:36:23.939403   60028 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 00:36:23.939432   60028 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 00:36:24.018254   60028 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 00:36:24.018282   60028 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 00:36:24.022461   60028 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1101 00:36:24.022470   60028 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-195256" to be "Ready" ...
	I1101 00:36:24.022470   60028 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 00:36:24.022519   60028 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 00:36:24.022617   60028 docker.go:699] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1101 00:36:24.022637   60028 cache_images.go:84] Images are preloaded, skipping loading
	I1101 00:36:24.022645   60028 cache_images.go:262] succeeded pushing to: default-k8s-diff-port-195256
	I1101 00:36:24.022650   60028 cache_images.go:263] failed pushing to: 
	I1101 00:36:24.022683   60028 main.go:141] libmachine: Making call to close driver server
	I1101 00:36:24.022697   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .Close
	I1101 00:36:24.022983   60028 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:36:24.023024   60028 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:36:24.023058   60028 main.go:141] libmachine: Making call to close driver server
	I1101 00:36:24.023078   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .Close
	I1101 00:36:24.023363   60028 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:36:24.023385   60028 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:36:24.023392   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | Closing plugin on server side
	I1101 00:36:24.084473   60028 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 00:36:24.084500   60028 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 00:36:24.107614   60028 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 00:36:24.107636   60028 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 00:36:24.272488   60028 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 00:36:24.272514   60028 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 00:36:24.293839   60028 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 00:36:24.321541   60028 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 00:36:24.321574   60028 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 00:36:24.413666   60028 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 00:36:24.413696   60028 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 00:36:24.448674   60028 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 00:36:24.448701   60028 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 00:36:24.472078   60028 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 00:36:24.472106   60028 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 00:36:24.508976   60028 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 00:36:24.509018   60028 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 00:36:24.530492   60028 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 00:36:20.688547   59728 pod_ready.go:102] pod "coredns-5644d7b6d9-67f7c" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:22.692752   59728 pod_ready.go:102] pod "coredns-5644d7b6d9-67f7c" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:25.037454   60028 node_ready.go:49] node "default-k8s-diff-port-195256" has status "Ready":"True"
	I1101 00:36:25.037486   60028 node_ready.go:38] duration metric: took 1.014995076s waiting for node "default-k8s-diff-port-195256" to be "Ready" ...
	I1101 00:36:25.037499   60028 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:36:25.047818   60028 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fgw5x" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:26.076755   60028 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.182786252s)
	I1101 00:36:26.076812   60028 main.go:141] libmachine: Making call to close driver server
	I1101 00:36:26.076827   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .Close
	I1101 00:36:26.076872   60028 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.238489261s)
	I1101 00:36:26.076899   60028 main.go:141] libmachine: Making call to close driver server
	I1101 00:36:26.076910   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .Close
	I1101 00:36:26.077121   60028 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:36:26.077289   60028 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:36:26.077334   60028 main.go:141] libmachine: Making call to close driver server
	I1101 00:36:26.077333   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | Closing plugin on server side
	I1101 00:36:26.077347   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .Close
	I1101 00:36:26.077266   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | Closing plugin on server side
	I1101 00:36:26.077189   60028 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:36:26.077431   60028 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:36:26.077458   60028 main.go:141] libmachine: Making call to close driver server
	I1101 00:36:26.077483   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .Close
	I1101 00:36:26.077586   60028 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:36:26.077605   60028 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:36:26.079044   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | Closing plugin on server side
	I1101 00:36:26.079148   60028 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:36:26.079167   60028 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:36:26.085244   60028 main.go:141] libmachine: Making call to close driver server
	I1101 00:36:26.085285   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .Close
	I1101 00:36:26.085603   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | Closing plugin on server side
	I1101 00:36:26.085652   60028 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:36:26.085667   60028 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:36:26.212291   60028 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.918405988s)
	I1101 00:36:26.212354   60028 main.go:141] libmachine: Making call to close driver server
	I1101 00:36:26.212370   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .Close
	I1101 00:36:26.212662   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | Closing plugin on server side
	I1101 00:36:26.212695   60028 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:36:26.212711   60028 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:36:26.212725   60028 main.go:141] libmachine: Making call to close driver server
	I1101 00:36:26.212746   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .Close
	I1101 00:36:26.213034   60028 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:36:26.213055   60028 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:36:26.213067   60028 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-195256"
	I1101 00:36:26.637986   60028 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.107431597s)
	I1101 00:36:26.638075   60028 main.go:141] libmachine: Making call to close driver server
	I1101 00:36:26.638096   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .Close
	I1101 00:36:26.638471   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | Closing plugin on server side
	I1101 00:36:26.638518   60028 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:36:26.638530   60028 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:36:26.638541   60028 main.go:141] libmachine: Making call to close driver server
	I1101 00:36:26.638551   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) Calling .Close
	I1101 00:36:26.638821   60028 main.go:141] libmachine: (default-k8s-diff-port-195256) DBG | Closing plugin on server side
	I1101 00:36:26.638854   60028 main.go:141] libmachine: Successfully made call to close driver server
	I1101 00:36:26.638865   60028 main.go:141] libmachine: Making call to close connection to plugin binary
	I1101 00:36:26.640559   60028 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-195256 addons enable metrics-server	
	
	
	I1101 00:36:26.642129   60028 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1101 00:36:23.861593   59907 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25jvq" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:26.400253   59907 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25jvq" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:25.187057   59728 pod_ready.go:102] pod "coredns-5644d7b6d9-67f7c" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:27.686035   59728 pod_ready.go:92] pod "coredns-5644d7b6d9-67f7c" in "kube-system" namespace has status "Ready":"True"
	I1101 00:36:27.686060   59728 pod_ready.go:81] duration metric: took 38.580261242s waiting for pod "coredns-5644d7b6d9-67f7c" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:27.686069   59728 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-kj7pf" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:27.687962   59728 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-kj7pf" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-kj7pf" not found
	I1101 00:36:27.687998   59728 pod_ready.go:81] duration metric: took 1.919558ms waiting for pod "coredns-5644d7b6d9-kj7pf" in "kube-system" namespace to be "Ready" ...
	E1101 00:36:27.688010   59728 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-kj7pf" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-kj7pf" not found
	I1101 00:36:27.688019   59728 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-993392" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:27.692310   59728 pod_ready.go:92] pod "etcd-old-k8s-version-993392" in "kube-system" namespace has status "Ready":"True"
	I1101 00:36:27.692334   59728 pod_ready.go:81] duration metric: took 4.306668ms waiting for pod "etcd-old-k8s-version-993392" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:27.692347   59728 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-993392" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:27.696848   59728 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-993392" in "kube-system" namespace has status "Ready":"True"
	I1101 00:36:27.696874   59728 pod_ready.go:81] duration metric: took 4.518393ms waiting for pod "kube-apiserver-old-k8s-version-993392" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:27.696887   59728 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-993392" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:27.704670   59728 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-993392" in "kube-system" namespace has status "Ready":"True"
	I1101 00:36:27.704699   59728 pod_ready.go:81] duration metric: took 7.801142ms waiting for pod "kube-controller-manager-old-k8s-version-993392" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:27.704712   59728 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6qzxd" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:27.883177   59728 pod_ready.go:92] pod "kube-proxy-6qzxd" in "kube-system" namespace has status "Ready":"True"
	I1101 00:36:27.883205   59728 pod_ready.go:81] duration metric: took 178.483063ms waiting for pod "kube-proxy-6qzxd" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:27.883220   59728 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-993392" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:28.282980   59728 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-993392" in "kube-system" namespace has status "Ready":"True"
	I1101 00:36:28.283014   59728 pod_ready.go:81] duration metric: took 399.785279ms waiting for pod "kube-scheduler-old-k8s-version-993392" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:28.283029   59728 pod_ready.go:38] duration metric: took 39.187179405s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:36:28.283053   59728 api_server.go:52] waiting for apiserver process to appear ...
	I1101 00:36:28.283098   59728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:36:28.298623   59728 api_server.go:72] duration metric: took 39.782522934s to wait for apiserver process to appear ...
	I1101 00:36:28.298646   59728 api_server.go:88] waiting for apiserver healthz status ...
	I1101 00:36:28.298660   59728 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1101 00:36:28.304951   59728 api_server.go:279] https://192.168.39.70:8443/healthz returned 200:
	ok
	I1101 00:36:28.305716   59728 api_server.go:141] control plane version: v1.16.0
	I1101 00:36:28.305736   59728 api_server.go:131] duration metric: took 7.083574ms to wait for apiserver health ...
	I1101 00:36:28.305745   59728 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 00:36:28.497370   59728 system_pods.go:59] 8 kube-system pods found
	I1101 00:36:28.497403   59728 system_pods.go:61] "coredns-5644d7b6d9-67f7c" [2d312387-7c72-428b-807c-3a200439f116] Running
	I1101 00:36:28.497410   59728 system_pods.go:61] "etcd-old-k8s-version-993392" [7eefc8f6-b708-4d05-849a-8d15a4cabb86] Running
	I1101 00:36:28.497415   59728 system_pods.go:61] "kube-apiserver-old-k8s-version-993392" [e646f5fb-7a3e-4db5-b5f8-d255bc946d12] Running
	I1101 00:36:28.497421   59728 system_pods.go:61] "kube-controller-manager-old-k8s-version-993392" [663a0c13-d3ae-46aa-85a7-b1cca0995a50] Running
	I1101 00:36:28.497426   59728 system_pods.go:61] "kube-proxy-6qzxd" [938e4a3a-f590-426f-9856-62d7307d3d75] Running
	I1101 00:36:28.497432   59728 system_pods.go:61] "kube-scheduler-old-k8s-version-993392" [315d1110-59c8-4133-abab-65aba4e1304c] Running
	I1101 00:36:28.497447   59728 system_pods.go:61] "metrics-server-74d5856cc6-sqmfp" [b77b51ba-e462-4f30-8063-ca70c7528e90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 00:36:28.497464   59728 system_pods.go:61] "storage-provisioner" [da85e132-ee90-421b-8e89-8804f7bb59ca] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 00:36:28.497476   59728 system_pods.go:74] duration metric: took 191.723625ms to wait for pod list to return data ...
	I1101 00:36:28.497491   59728 default_sa.go:34] waiting for default service account to be created ...
	I1101 00:36:28.683710   59728 default_sa.go:45] found service account: "default"
	I1101 00:36:28.683741   59728 default_sa.go:55] duration metric: took 186.240843ms for default service account to be created ...
	I1101 00:36:28.683760   59728 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 00:36:28.886395   59728 system_pods.go:86] 8 kube-system pods found
	I1101 00:36:28.886425   59728 system_pods.go:89] "coredns-5644d7b6d9-67f7c" [2d312387-7c72-428b-807c-3a200439f116] Running
	I1101 00:36:28.886433   59728 system_pods.go:89] "etcd-old-k8s-version-993392" [7eefc8f6-b708-4d05-849a-8d15a4cabb86] Running
	I1101 00:36:28.886440   59728 system_pods.go:89] "kube-apiserver-old-k8s-version-993392" [e646f5fb-7a3e-4db5-b5f8-d255bc946d12] Running
	I1101 00:36:28.886447   59728 system_pods.go:89] "kube-controller-manager-old-k8s-version-993392" [663a0c13-d3ae-46aa-85a7-b1cca0995a50] Running
	I1101 00:36:28.886454   59728 system_pods.go:89] "kube-proxy-6qzxd" [938e4a3a-f590-426f-9856-62d7307d3d75] Running
	I1101 00:36:28.886460   59728 system_pods.go:89] "kube-scheduler-old-k8s-version-993392" [315d1110-59c8-4133-abab-65aba4e1304c] Running
	I1101 00:36:28.886470   59728 system_pods.go:89] "metrics-server-74d5856cc6-sqmfp" [b77b51ba-e462-4f30-8063-ca70c7528e90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 00:36:28.886484   59728 system_pods.go:89] "storage-provisioner" [da85e132-ee90-421b-8e89-8804f7bb59ca] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 00:36:28.886512   59728 system_pods.go:126] duration metric: took 202.730681ms to wait for k8s-apps to be running ...
	I1101 00:36:28.886529   59728 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 00:36:28.886577   59728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:36:28.905327   59728 system_svc.go:56] duration metric: took 18.788907ms WaitForService to wait for kubelet.
	I1101 00:36:28.905360   59728 kubeadm.go:581] duration metric: took 40.389264174s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 00:36:28.905387   59728 node_conditions.go:102] verifying NodePressure condition ...
	I1101 00:36:29.083994   59728 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1101 00:36:29.084048   59728 node_conditions.go:123] node cpu capacity is 2
	I1101 00:36:29.084064   59728 node_conditions.go:105] duration metric: took 178.670245ms to run NodePressure ...
	I1101 00:36:29.084077   59728 start.go:228] waiting for startup goroutines ...
	I1101 00:36:29.084091   59728 start.go:233] waiting for cluster config update ...
	I1101 00:36:29.084104   59728 start.go:242] writing updated cluster config ...
	I1101 00:36:29.084465   59728 ssh_runner.go:195] Run: rm -f paused
	I1101 00:36:29.148676   59728 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1101 00:36:29.150575   59728 out.go:177] 
	W1101 00:36:29.152169   59728 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1101 00:36:29.154082   59728 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1101 00:36:29.155666   59728 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-993392" cluster and "default" namespace by default
	I1101 00:36:24.385829   60145 api_server.go:166] Checking apiserver status ...
	I1101 00:36:24.385906   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:24.398496   60145 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:24.885050   60145 api_server.go:166] Checking apiserver status ...
	I1101 00:36:24.885133   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:24.900751   60145 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:25.385385   60145 api_server.go:166] Checking apiserver status ...
	I1101 00:36:25.385463   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:25.402690   60145 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:25.884940   60145 api_server.go:166] Checking apiserver status ...
	I1101 00:36:25.885050   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:25.900464   60145 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:26.385511   60145 api_server.go:166] Checking apiserver status ...
	I1101 00:36:26.385610   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:26.400532   60145 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:26.885055   60145 api_server.go:166] Checking apiserver status ...
	I1101 00:36:26.885130   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:26.897170   60145 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:27.385719   60145 api_server.go:166] Checking apiserver status ...
	I1101 00:36:27.385839   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:27.398387   60145 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:27.885951   60145 api_server.go:166] Checking apiserver status ...
	I1101 00:36:27.886055   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:27.899291   60145 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:28.385791   60145 api_server.go:166] Checking apiserver status ...
	I1101 00:36:28.385891   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:28.398255   60145 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:28.885350   60145 api_server.go:166] Checking apiserver status ...
	I1101 00:36:28.885434   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:28.899137   60145 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:26.643523   60028 addons.go:502] enable addons completed in 3.031483911s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1101 00:36:27.402310   60028 pod_ready.go:102] pod "coredns-5dd5756b68-fgw5x" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:29.407177   60028 pod_ready.go:102] pod "coredns-5dd5756b68-fgw5x" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:28.860036   59907 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25jvq" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:30.860771   59907 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25jvq" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:32.862309   59907 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25jvq" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:29.385263   60145 api_server.go:166] Checking apiserver status ...
	I1101 00:36:29.385354   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:29.401511   60145 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:29.885019   60145 api_server.go:166] Checking apiserver status ...
	I1101 00:36:29.885083   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:29.898266   60145 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:30.385866   60145 api_server.go:166] Checking apiserver status ...
	I1101 00:36:30.385984   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:30.399012   60145 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:30.885612   60145 api_server.go:166] Checking apiserver status ...
	I1101 00:36:30.885675   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:30.898251   60145 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:31.385292   60145 api_server.go:166] Checking apiserver status ...
	I1101 00:36:31.385393   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:31.398443   60145 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:31.884972   60145 api_server.go:166] Checking apiserver status ...
	I1101 00:36:31.885064   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 00:36:31.896972   60145 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 00:36:32.352824   60145 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1101 00:36:32.352860   60145 kubeadm.go:1128] stopping kube-system containers ...
	I1101 00:36:32.352931   60145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1101 00:36:32.383101   60145 docker.go:470] Stopping containers: [3c8e317b8906 f89000827997 835e0cd6a7db e54445b0c243 401f5ddc9cae 185692c0192f e03544bf5700 1feed6927cce aa848fdcb37e cbb6dd19f364 fa7faffc273f d6ff8a0cf578 e417aa7e7e41 35555a6414e8 3c91f1f98bfe]
	I1101 00:36:32.383208   60145 ssh_runner.go:195] Run: docker stop 3c8e317b8906 f89000827997 835e0cd6a7db e54445b0c243 401f5ddc9cae 185692c0192f e03544bf5700 1feed6927cce aa848fdcb37e cbb6dd19f364 fa7faffc273f d6ff8a0cf578 e417aa7e7e41 35555a6414e8 3c91f1f98bfe
	I1101 00:36:32.410698   60145 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 00:36:32.427544   60145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 00:36:32.439408   60145 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 00:36:32.439503   60145 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 00:36:32.451220   60145 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 00:36:32.451252   60145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:36:32.587043   60145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:36:33.739614   60145 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.152532654s)
	I1101 00:36:33.739655   60145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:36:33.939804   60145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:36:34.012824   60145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 00:36:34.088818   60145 api_server.go:52] waiting for apiserver process to appear ...
	I1101 00:36:34.088902   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:36:34.107298   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:36:30.902433   60028 pod_ready.go:92] pod "coredns-5dd5756b68-fgw5x" in "kube-system" namespace has status "Ready":"True"
	I1101 00:36:30.902453   60028 pod_ready.go:81] duration metric: took 5.854605964s waiting for pod "coredns-5dd5756b68-fgw5x" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:30.902468   60028 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-195256" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:30.907782   60028 pod_ready.go:92] pod "etcd-default-k8s-diff-port-195256" in "kube-system" namespace has status "Ready":"True"
	I1101 00:36:30.907820   60028 pod_ready.go:81] duration metric: took 5.327979ms waiting for pod "etcd-default-k8s-diff-port-195256" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:30.907834   60028 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-195256" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:30.912538   60028 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-195256" in "kube-system" namespace has status "Ready":"True"
	I1101 00:36:30.912556   60028 pod_ready.go:81] duration metric: took 4.713118ms waiting for pod "kube-apiserver-default-k8s-diff-port-195256" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:30.912566   60028 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-195256" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:30.994402   60028 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-195256" in "kube-system" namespace has status "Ready":"True"
	I1101 00:36:30.994429   60028 pod_ready.go:81] duration metric: took 81.854267ms waiting for pod "kube-controller-manager-default-k8s-diff-port-195256" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:30.994443   60028 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4g4mh" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:31.395760   60028 pod_ready.go:92] pod "kube-proxy-4g4mh" in "kube-system" namespace has status "Ready":"True"
	I1101 00:36:31.395791   60028 pod_ready.go:81] duration metric: took 401.339054ms waiting for pod "kube-proxy-4g4mh" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:31.395804   60028 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-195256" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:31.795362   60028 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-195256" in "kube-system" namespace has status "Ready":"True"
	I1101 00:36:31.795394   60028 pod_ready.go:81] duration metric: took 399.580419ms waiting for pod "kube-scheduler-default-k8s-diff-port-195256" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:31.795408   60028 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-t5dsp" in "kube-system" namespace to be "Ready" ...
	I1101 00:36:34.108482   60028 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t5dsp" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:35.362678   59907 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25jvq" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:37.364696   59907 pod_ready.go:102] pod "metrics-server-57f55c9bc5-25jvq" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:34.620201   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:36:35.119766   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:36:35.619737   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:36:36.120192   60145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:36:36.153782   60145 api_server.go:72] duration metric: took 2.06496233s to wait for apiserver process to appear ...
	I1101 00:36:36.153820   60145 api_server.go:88] waiting for apiserver healthz status ...
	I1101 00:36:36.153843   60145 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 00:36:36.154424   60145 api_server.go:269] stopped: https://192.168.61.122:8443/healthz: Get "https://192.168.61.122:8443/healthz": dial tcp 192.168.61.122:8443: connect: connection refused
	I1101 00:36:36.154481   60145 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 00:36:36.154930   60145 api_server.go:269] stopped: https://192.168.61.122:8443/healthz: Get "https://192.168.61.122:8443/healthz": dial tcp 192.168.61.122:8443: connect: connection refused
	I1101 00:36:36.655781   60145 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 00:36:36.615308   60028 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t5dsp" in "kube-system" namespace has status "Ready":"False"
	I1101 00:36:39.112711   60028 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t5dsp" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-11-01 00:35:06 UTC, ends at Wed 2023-11-01 00:36:40 UTC. --
	Nov 01 00:36:19 old-k8s-version-993392 dockerd[1193]: time="2023-11-01T00:36:19.329221678Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 01 00:36:19 old-k8s-version-993392 dockerd[1193]: time="2023-11-01T00:36:19.356910357Z" level=info msg="ignoring event" container=21e6e128ea948820d18eca4cd5dc0a7078f04b0fe4addd42b7b21b124f8b8b51 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 00:36:19 old-k8s-version-993392 dockerd[1199]: time="2023-11-01T00:36:19.357887018Z" level=info msg="shim disconnected" id=21e6e128ea948820d18eca4cd5dc0a7078f04b0fe4addd42b7b21b124f8b8b51 namespace=moby
	Nov 01 00:36:19 old-k8s-version-993392 dockerd[1199]: time="2023-11-01T00:36:19.358033102Z" level=warning msg="cleaning up after shim disconnected" id=21e6e128ea948820d18eca4cd5dc0a7078f04b0fe4addd42b7b21b124f8b8b51 namespace=moby
	Nov 01 00:36:19 old-k8s-version-993392 dockerd[1199]: time="2023-11-01T00:36:19.358083833Z" level=info msg="cleaning up dead shim" namespace=moby
	Nov 01 00:36:19 old-k8s-version-993392 dockerd[1199]: time="2023-11-01T00:36:19.892984589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 01 00:36:19 old-k8s-version-993392 dockerd[1199]: time="2023-11-01T00:36:19.893414827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 01 00:36:19 old-k8s-version-993392 dockerd[1199]: time="2023-11-01T00:36:19.893497649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 01 00:36:19 old-k8s-version-993392 dockerd[1199]: time="2023-11-01T00:36:19.893524726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 01 00:36:20 old-k8s-version-993392 dockerd[1193]: time="2023-11-01T00:36:20.517537499Z" level=info msg="ignoring event" container=4ab0e3586b932accbec65f9e69e80d313308b1e7bfc4741c77accdc54dcd250f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 00:36:20 old-k8s-version-993392 dockerd[1199]: time="2023-11-01T00:36:20.518308617Z" level=info msg="shim disconnected" id=4ab0e3586b932accbec65f9e69e80d313308b1e7bfc4741c77accdc54dcd250f namespace=moby
	Nov 01 00:36:20 old-k8s-version-993392 dockerd[1199]: time="2023-11-01T00:36:20.518422305Z" level=warning msg="cleaning up after shim disconnected" id=4ab0e3586b932accbec65f9e69e80d313308b1e7bfc4741c77accdc54dcd250f namespace=moby
	Nov 01 00:36:20 old-k8s-version-993392 dockerd[1199]: time="2023-11-01T00:36:20.518440328Z" level=info msg="cleaning up dead shim" namespace=moby
	Nov 01 00:36:32 old-k8s-version-993392 dockerd[1199]: time="2023-11-01T00:36:32.346509524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 01 00:36:32 old-k8s-version-993392 dockerd[1199]: time="2023-11-01T00:36:32.347310631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 01 00:36:32 old-k8s-version-993392 dockerd[1199]: time="2023-11-01T00:36:32.347419644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 01 00:36:32 old-k8s-version-993392 dockerd[1199]: time="2023-11-01T00:36:32.347482351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 01 00:36:35 old-k8s-version-993392 dockerd[1199]: time="2023-11-01T00:36:35.417057057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 01 00:36:35 old-k8s-version-993392 dockerd[1199]: time="2023-11-01T00:36:35.417135622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 01 00:36:35 old-k8s-version-993392 dockerd[1199]: time="2023-11-01T00:36:35.417158796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 01 00:36:35 old-k8s-version-993392 dockerd[1199]: time="2023-11-01T00:36:35.417236488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 01 00:36:36 old-k8s-version-993392 dockerd[1193]: time="2023-11-01T00:36:36.022743875Z" level=info msg="ignoring event" container=59bef961f0c2dc39899c40d5253e681864b2858e32da1d85a84b33c8f3ce44b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 00:36:36 old-k8s-version-993392 dockerd[1199]: time="2023-11-01T00:36:36.023126700Z" level=info msg="shim disconnected" id=59bef961f0c2dc39899c40d5253e681864b2858e32da1d85a84b33c8f3ce44b5 namespace=moby
	Nov 01 00:36:36 old-k8s-version-993392 dockerd[1199]: time="2023-11-01T00:36:36.023638189Z" level=warning msg="cleaning up after shim disconnected" id=59bef961f0c2dc39899c40d5253e681864b2858e32da1d85a84b33c8f3ce44b5 namespace=moby
	Nov 01 00:36:36 old-k8s-version-993392 dockerd[1199]: time="2023-11-01T00:36:36.023813586Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE                         COMMAND                  CREATED              STATUS                            PORTS     NAMES
	59bef961f0c2   a90209bb39e3                  "nginx -g 'daemon of…"   6 seconds ago        Exited (1) 5 seconds ago                    k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-d6b4b5544-7ccrz_kubernetes-dashboard_7035f6e5-2eee-45a4-b122-c29d94bb9207_2
	6afcb3a063fb   6e38f40d628d                  "/storage-provisioner"   9 seconds ago        Up 8 seconds                                k8s_storage-provisioner_storage-provisioner_kube-system_da85e132-ee90-421b-8e89-8804f7bb59ca_2
	7e8d46bf3c60   kubernetesui/dashboard        "/dashboard --insecu…"   29 seconds ago       Up 28 seconds                               k8s_kubernetes-dashboard_kubernetes-dashboard-84b68f675b-98lbc_kubernetes-dashboard_6992a869-7788-49b9-8fc4-9302292db59f_0
	1ba94105da01   k8s.gcr.io/pause:3.1          "/pause"                 37 seconds ago       Up 36 seconds                               k8s_POD_kubernetes-dashboard-84b68f675b-98lbc_kubernetes-dashboard_6992a869-7788-49b9-8fc4-9302292db59f_0
	ba0e892c7b98   k8s.gcr.io/pause:3.1          "/pause"                 37 seconds ago       Up 36 seconds                               k8s_POD_dashboard-metrics-scraper-d6b4b5544-7ccrz_kubernetes-dashboard_7035f6e5-2eee-45a4-b122-c29d94bb9207_0
	c3a7db1264d0   k8s.gcr.io/pause:3.1          "/pause"                 38 seconds ago       Up 37 seconds                               k8s_POD_metrics-server-74d5856cc6-sqmfp_kube-system_b77b51ba-e462-4f30-8063-ca70c7528e90_0
	fad9e0bf8c7e   56cc512116c8                  "sleep 3600"             54 seconds ago       Up 53 seconds                               k8s_busybox_busybox_default_2a145c06-0f3c-49a5-826d-94480900b4af_1
	8d704f46a154   c21b0c7400f9                  "/usr/local/bin/kube…"   54 seconds ago       Up 53 seconds                               k8s_kube-proxy_kube-proxy-6qzxd_kube-system_938e4a3a-f590-426f-9856-62d7307d3d75_1
	cd4dd3074c38   bf261d157914                  "/coredns -conf /etc…"   54 seconds ago       Up 53 seconds                               k8s_coredns_coredns-5644d7b6d9-67f7c_kube-system_2d312387-7c72-428b-807c-3a200439f116_1
	05e6fe89c1e4   k8s.gcr.io/pause:3.1          "/pause"                 55 seconds ago       Up 53 seconds                               k8s_POD_kube-proxy-6qzxd_kube-system_938e4a3a-f590-426f-9856-62d7307d3d75_1
	6f1d3748fb63   k8s.gcr.io/pause:3.1          "/pause"                 55 seconds ago       Up 53 seconds                               k8s_POD_busybox_default_2a145c06-0f3c-49a5-826d-94480900b4af_1
	187877149d96   6e38f40d628d                  "/storage-provisioner"   55 seconds ago       Exited (1) 23 seconds ago                   k8s_storage-provisioner_storage-provisioner_kube-system_da85e132-ee90-421b-8e89-8804f7bb59ca_1
	532a997db590   k8s.gcr.io/pause:3.1          "/pause"                 55 seconds ago       Up 53 seconds                               k8s_POD_coredns-5644d7b6d9-67f7c_kube-system_2d312387-7c72-428b-807c-3a200439f116_1
	270924596a79   k8s.gcr.io/pause:3.1          "/pause"                 55 seconds ago       Up 54 seconds                               k8s_POD_storage-provisioner_kube-system_da85e132-ee90-421b-8e89-8804f7bb59ca_1
	7a1c1e81e44c   06a629a7e51c                  "kube-controller-man…"   About a minute ago   Up About a minute                           k8s_kube-controller-manager_kube-controller-manager-old-k8s-version-993392_kube-system_b39706a67360d65bfa3cf2560791efe9_0
	c708bc03f294   b2756210eeab                  "etcd --advertise-cl…"   About a minute ago   Up About a minute                           k8s_etcd_etcd-old-k8s-version-993392_kube-system_679c2a9c61663e11627b17300fa2de58_1
	ef004c92311e   301ddc62b80b                  "kube-scheduler --au…"   About a minute ago   Up About a minute                           k8s_kube-scheduler_kube-scheduler-old-k8s-version-993392_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_1
	1359bc383bfe   b305571ca60a                  "kube-apiserver --ad…"   About a minute ago   Up About a minute                           k8s_kube-apiserver_kube-apiserver-old-k8s-version-993392_kube-system_76fc4f862b1b155f977dd622de8fefee_1
	e492af6570bc   k8s.gcr.io/pause:3.1          "/pause"                 About a minute ago   Up About a minute                           k8s_POD_kube-scheduler-old-k8s-version-993392_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_1
	2aa0ddf5668a   k8s.gcr.io/pause:3.1          "/pause"                 About a minute ago   Up About a minute                           k8s_POD_kube-controller-manager-old-k8s-version-993392_kube-system_b39706a67360d65bfa3cf2560791efe9_0
	384a3e00be69   k8s.gcr.io/pause:3.1          "/pause"                 About a minute ago   Up About a minute                           k8s_POD_kube-apiserver-old-k8s-version-993392_kube-system_76fc4f862b1b155f977dd622de8fefee_1
	cb932e700219   k8s.gcr.io/pause:3.1          "/pause"                 About a minute ago   Up About a minute                           k8s_POD_etcd-old-k8s-version-993392_kube-system_679c2a9c61663e11627b17300fa2de58_1
	157ab5698f44   gcr.io/k8s-minikube/busybox   "sleep 3600"             2 minutes ago        Exited (137) About a minute ago             k8s_busybox_busybox_default_2a145c06-0f3c-49a5-826d-94480900b4af_0
	80f4711ac732   k8s.gcr.io/pause:3.1          "/pause"                 2 minutes ago        Exited (0) About a minute ago               k8s_POD_busybox_default_2a145c06-0f3c-49a5-826d-94480900b4af_0
	1681355536d2   bf261d157914                  "/coredns -conf /etc…"   3 minutes ago        Exited (0) About a minute ago               k8s_coredns_coredns-5644d7b6d9-67f7c_kube-system_2d312387-7c72-428b-807c-3a200439f116_0
	ee0792901fa8   c21b0c7400f9                  "/usr/local/bin/kube…"   3 minutes ago        Exited (2) About a minute ago               k8s_kube-proxy_kube-proxy-6qzxd_kube-system_938e4a3a-f590-426f-9856-62d7307d3d75_0
	5a44ee4d63c5   k8s.gcr.io/pause:3.1          "/pause"                 3 minutes ago        Exited (0) About a minute ago               k8s_POD_coredns-5644d7b6d9-67f7c_kube-system_2d312387-7c72-428b-807c-3a200439f116_0
	7db6b38a93c6   k8s.gcr.io/pause:3.1          "/pause"                 3 minutes ago        Exited (0) About a minute ago               k8s_POD_kube-proxy-6qzxd_kube-system_938e4a3a-f590-426f-9856-62d7307d3d75_0
	dd5ef506a5c9   b2756210eeab                  "etcd --advertise-cl…"   3 minutes ago        Exited (0) About a minute ago               k8s_etcd_etcd-old-k8s-version-993392_kube-system_679c2a9c61663e11627b17300fa2de58_0
	444c0ced130a   301ddc62b80b                  "kube-scheduler --au…"   3 minutes ago        Exited (2) About a minute ago               k8s_kube-scheduler_kube-scheduler-old-k8s-version-993392_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	a28712848ac8   b305571ca60a                  "kube-apiserver --ad…"   3 minutes ago        Exited (0) About a minute ago               k8s_kube-apiserver_kube-apiserver-old-k8s-version-993392_kube-system_76fc4f862b1b155f977dd622de8fefee_0
	1172eb49ab03   k8s.gcr.io/pause:3.1          "/pause"                 3 minutes ago        Exited (0) About a minute ago               k8s_POD_etcd-old-k8s-version-993392_kube-system_679c2a9c61663e11627b17300fa2de58_0
	649abe186bed   k8s.gcr.io/pause:3.1          "/pause"                 3 minutes ago        Exited (0) About a minute ago               k8s_POD_kube-scheduler-old-k8s-version-993392_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	4cd70b650b68   k8s.gcr.io/pause:3.1          "/pause"                 3 minutes ago        Exited (0) About a minute ago               k8s_POD_kube-apiserver-old-k8s-version-993392_kube-system_76fc4f862b1b155f977dd622de8fefee_0
	time="2023-11-01T00:36:41Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
	
	* 
	* ==> coredns [1681355536d2] <==
	* E1101 00:33:45.600330       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1101 00:34:42.233092       1 reflector.go:270] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=492&timeout=5m11s&timeoutSeconds=311&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	E1101 00:34:42.233184       1 reflector.go:270] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=146&timeout=7m22s&timeoutSeconds=442&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	E1101 00:34:42.233215       1 reflector.go:270] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=491&timeout=9m54s&timeoutSeconds=594&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	E1101 00:33:45.599331       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I1101 00:33:45.599498       1 trace.go:82] Trace[1490166853]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2023-11-01 00:33:15.598900811 +0000 UTC m=+0.031849993) (total time: 30.000556317s):
	Trace[1490166853]: [30.000556317s] [30.000556317s] END
	[INFO] Reloading
	2023-11-01T00:33:57.860Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	[INFO] Reloading complete
	2023-11-01T00:33:57.890Z [INFO] 127.0.0.1:35235 - 52401 "HINFO IN 1451251073064715437.1744364638403770540. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031246946s
	[INFO] SIGTERM: Shutting down servers then terminating
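	
	Note: the reflector failures above trace one path: coredns's kubernetes plugin lists and watches Services/Endpoints/Namespaces through the cluster VIP 10.96.0.1:443, which timed out while the node was still coming up and then refused connections at 00:34:42, when the apiserver shut down for the restart. A quick in-cluster probe of the same VIP, as a sketch (the pod name "apicheck" and the curl image are assumptions, not part of this run):
	
	  # Hypothetical throwaway pod hitting the kubernetes Service VIP from the errors:
	  kubectl run apicheck --rm -it --restart=Never --image=curlimages/curl --command -- \
	    curl -k -m 5 https://10.96.0.1:443/version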
	
	* 
	* ==> coredns [cd4dd3074c38] <==
	* 2023-11-01T00:35:52.539Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-11-01T00:35:52.559Z [INFO] 127.0.0.1:48153 - 55771 "HINFO IN 1218923417123665864.7994173543421600400. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020338027s
	2023-11-01T00:35:57.206Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-11-01T00:36:07.205Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-11-01T00:36:17.205Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	I1101 00:36:17.538209       1 trace.go:82] Trace[1113987581]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2023-11-01 00:35:47.537097018 +0000 UTC m=+0.022777054) (total time: 30.00104518s):
	Trace[1113987581]: [30.00104518s] [30.00104518s] END
	I1101 00:36:17.538370       1 trace.go:82] Trace[1185397848]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2023-11-01 00:35:47.537469716 +0000 UTC m=+0.023149800) (total time: 30.000882595s):
	Trace[1185397848]: [30.000882595s] [30.000882595s] END
	E1101 00:36:17.538531       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1101 00:36:17.538535       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I1101 00:36:17.538220       1 trace.go:82] Trace[389308180]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2023-11-01 00:35:47.5377863 +0000 UTC m=+0.023466351) (total time: 30.000407809s):
	Trace[389308180]: [30.000407809s] [30.000407809s] END
	E1101 00:36:17.538639       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-993392
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-993392
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9
	                    minikube.k8s.io/name=old-k8s-version-993392
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_01T00_32_58_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Nov 2023 00:32:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Nov 2023 00:35:45 +0000   Wed, 01 Nov 2023 00:32:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Nov 2023 00:35:45 +0000   Wed, 01 Nov 2023 00:32:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Nov 2023 00:35:45 +0000   Wed, 01 Nov 2023 00:32:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Nov 2023 00:35:45 +0000   Wed, 01 Nov 2023 00:32:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.70
	  Hostname:    old-k8s-version-993392
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 8af1de78e49747bdb1d57fda7118a553
	 System UUID:                8af1de78-e497-47bd-b1d5-7fda7118a553
	 Boot ID:                    5cdb5c4c-6ca5-4431-ba9e-afda7d988f49
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://24.0.6
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (11 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                coredns-5644d7b6d9-67f7c                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m28s
	  kube-system                etcd-old-k8s-version-993392                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                kube-apiserver-old-k8s-version-993392             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                kube-controller-manager-old-k8s-version-993392    200m (10%)    0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                kube-proxy-6qzxd                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                kube-scheduler-old-k8s-version-993392             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                metrics-server-74d5856cc6-sqmfp                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         39s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kubernetes-dashboard       dashboard-metrics-scraper-d6b4b5544-7ccrz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kubernetes-dashboard       kubernetes-dashboard-84b68f675b-98lbc             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From                                Message
	  ----    ------                   ----                   ----                                -------
	  Normal  NodeHasSufficientMemory  3m54s (x8 over 3m55s)  kubelet, old-k8s-version-993392     Node old-k8s-version-993392 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s (x7 over 3m55s)  kubelet, old-k8s-version-993392     Node old-k8s-version-993392 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s (x8 over 3m55s)  kubelet, old-k8s-version-993392     Node old-k8s-version-993392 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m26s                  kube-proxy, old-k8s-version-993392  Starting kube-proxy.
	  Normal  Starting                 64s                    kubelet, old-k8s-version-993392     Starting kubelet.
	  Normal  NodeHasSufficientMemory  64s (x8 over 64s)      kubelet, old-k8s-version-993392     Node old-k8s-version-993392 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s (x8 over 64s)      kubelet, old-k8s-version-993392     Node old-k8s-version-993392 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x7 over 64s)      kubelet, old-k8s-version-993392     Node old-k8s-version-993392 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  64s                    kubelet, old-k8s-version-993392     Updated Node Allocatable limit across pods
	  Normal  Starting                 54s                    kube-proxy, old-k8s-version-993392  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Nov 1 00:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.062460] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Nov 1 00:35] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.890898] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.140224] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.605126] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.297407] systemd-fstab-generator[511]: Ignoring "noauto" for root device
	[  +0.113515] systemd-fstab-generator[522]: Ignoring "noauto" for root device
	[  +1.152415] systemd-fstab-generator[882]: Ignoring "noauto" for root device
	[  +0.282275] systemd-fstab-generator[921]: Ignoring "noauto" for root device
	[  +0.120748] systemd-fstab-generator[932]: Ignoring "noauto" for root device
	[  +0.124790] systemd-fstab-generator[945]: Ignoring "noauto" for root device
	[  +5.785891] systemd-fstab-generator[1184]: Ignoring "noauto" for root device
	[  +1.840695] kauditd_printk_skb: 67 callbacks suppressed
	[ +13.095985] systemd-fstab-generator[1657]: Ignoring "noauto" for root device
	[  +0.473117] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.167522] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 1 00:36] kauditd_printk_skb: 10 callbacks suppressed
	
	* 
	* ==> etcd [c708bc03f294] <==
	* 2023-11-01 00:35:39.986633 I | etcdserver: election = 1000ms
	2023-11-01 00:35:39.986750 I | etcdserver: snapshot count = 10000
	2023-11-01 00:35:39.986889 I | etcdserver: advertise client URLs = https://192.168.39.70:2379
	2023-11-01 00:35:40.326533 I | etcdserver: restarting member d9e0442f914d2c09 in cluster b9ca18127a3e3182 at commit index 533
	2023-11-01 00:35:40.367041 I | raft: d9e0442f914d2c09 became follower at term 2
	2023-11-01 00:35:40.367463 I | raft: newRaft d9e0442f914d2c09 [peers: [], term: 2, commit: 533, applied: 0, lastindex: 533, lastterm: 2]
	2023-11-01 00:35:40.377631 W | auth: simple token is not cryptographically signed
	2023-11-01 00:35:40.380482 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-11-01 00:35:40.382425 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-01 00:35:40.382835 I | embed: listening for metrics on http://192.168.39.70:2381
	2023-11-01 00:35:40.383166 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-11-01 00:35:40.383593 I | etcdserver/membership: added member d9e0442f914d2c09 [https://192.168.39.70:2380] to cluster b9ca18127a3e3182
	2023-11-01 00:35:40.383874 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-11-01 00:35:40.384025 I | etcdserver/api: enabled capabilities for version 3.3
	2023-11-01 00:35:41.468334 I | raft: d9e0442f914d2c09 is starting a new election at term 2
	2023-11-01 00:35:41.468376 I | raft: d9e0442f914d2c09 became candidate at term 3
	2023-11-01 00:35:41.468393 I | raft: d9e0442f914d2c09 received MsgVoteResp from d9e0442f914d2c09 at term 3
	2023-11-01 00:35:41.468405 I | raft: d9e0442f914d2c09 became leader at term 3
	2023-11-01 00:35:41.468410 I | raft: raft.node: d9e0442f914d2c09 elected leader d9e0442f914d2c09 at term 3
	2023-11-01 00:35:41.468896 I | etcdserver: published {Name:old-k8s-version-993392 ClientURLs:[https://192.168.39.70:2379]} to cluster b9ca18127a3e3182
	2023-11-01 00:35:41.469285 I | embed: ready to serve client requests
	2023-11-01 00:35:41.469432 I | embed: ready to serve client requests
	2023-11-01 00:35:41.470443 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-01 00:35:41.470900 I | embed: serving client requests on 192.168.39.70:2379
	2023-11-01 00:36:11.351872 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-5644d7b6d9-67f7c\" " with result "range_response_count:1 size:2003" took too long (182.311214ms) to execute
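	
	Note: this etcd section shows the expected single-member restart sequence: the member reloads at commit index 533, starts a new election, and elects itself leader at term 3. A health-probe sketch against the client URLs it advertises, reusing the cert paths from the "embed: ClientTLS" line above; passing the server cert as the client cert is an assumption that only works because client-cert-auth trusts the same CA:
	
	  # Hypothetical health check against the advertised client endpoint:
	  ETCDCTL_API=3 etcdctl --endpoints=https://192.168.39.70:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint health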
	
	* 
	* ==> etcd [dd5ef506a5c9] <==
	* 2023-11-01 00:32:50.238902 I | embed: ready to serve client requests
	2023-11-01 00:32:50.239660 I | etcdserver: setting up the initial cluster version to 3.3
	2023-11-01 00:32:50.239786 I | embed: ready to serve client requests
	2023-11-01 00:32:50.240689 I | embed: serving client requests on 192.168.39.70:2379
	2023-11-01 00:32:50.242894 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-01 00:32:50.276086 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-11-01 00:32:50.403573 I | etcdserver/api: enabled capabilities for version 3.3
	2023-11-01 00:32:50.403981 W | etcdserver: request "ID:3173220829967541508 Method:\"PUT\" Path:\"/0/version\" Val:\"3.3.0\" " with result "" took too long (127.75279ms) to execute
	2023-11-01 00:32:50.404208 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" limit:10000 " with result "range_response_count:0 size:4" took too long (124.749522ms) to execute
	2023-11-01 00:32:50.404593 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitiont\" count_only:true " with result "range_response_count:0 size:4" took too long (124.738513ms) to execute
	2023-11-01 00:32:50.407585 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitiont\" count_only:true " with result "range_response_count:0 size:4" took too long (116.130252ms) to execute
	2023-11-01 00:32:50.407959 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" limit:10000 " with result "range_response_count:0 size:4" took too long (116.598039ms) to execute
	2023-11-01 00:33:06.825928 W | etcdserver: request "header:<ID:3173220829967542152 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.70\" mod_revision:149 > success:<request_put:<key:\"/registry/masterleases/192.168.39.70\" value_size:68 lease:3173220829967542150 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.70\" > >>" with result "size:16" took too long (134.526854ms) to execute
	2023-11-01 00:33:06.826744 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (186.786305ms) to execute
	2023-11-01 00:33:11.976870 W | etcdserver: request "header:<ID:3173220829967542270 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/secrets/kube-system/pvc-protection-controller-token-gw7gv\" mod_revision:0 > success:<request_put:<key:\"/registry/secrets/kube-system/pvc-protection-controller-token-gw7gv\" value_size:2461 >> failure:<>>" with result "size:16" took too long (214.074961ms) to execute
	2023-11-01 00:33:15.171855 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" " with result "range_response_count:0 size:5" took too long (265.747242ms) to execute
	2023-11-01 00:33:15.172039 W | etcdserver: request "header:<ID:3173220829967542379 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/configmaps/kube-system/coredns\" mod_revision:195 > success:<request_put:<key:\"/registry/configmaps/kube-system/coredns\" value_size:491 >> failure:<request_range:<key:\"/registry/configmaps/kube-system/coredns\" > >>" with result "size:16" took too long (199.022841ms) to execute
	2023-11-01 00:33:17.978444 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-proxy-6qzxd\" " with result "range_response_count:1 size:2165" took too long (138.933956ms) to execute
	2023-11-01 00:33:21.592007 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-5644d7b6d9-67f7c\" " with result "range_response_count:1 size:1889" took too long (457.199346ms) to execute
	2023-11-01 00:33:33.407784 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:2853" took too long (248.837095ms) to execute
	2023-11-01 00:33:33.408103 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-5644d7b6d9-67f7c\" " with result "range_response_count:1 size:1889" took too long (273.055056ms) to execute
	2023-11-01 00:34:03.366503 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:6 size:11632" took too long (103.746509ms) to execute
	2023-11-01 00:34:42.382550 N | pkg/osutil: received terminated signal, shutting down...
	2023-11-01 00:34:42.383806 I | etcdserver: skipped leadership transfer for single member cluster
	WARNING: 2023/11/01 00:34:42 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: operation was canceled". Reconnecting...
	
	* 
	* ==> kernel <==
	*  00:36:41 up 1 min,  0 users,  load average: 1.15, 0.48, 0.18
	Linux old-k8s-version-993392 5.10.57 #1 SMP Tue Oct 31 22:14:31 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [1359bc383bfe] <==
	* I1101 00:35:44.429265       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1101 00:35:44.430292       1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
	E1101 00:35:44.448487       1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.70, ResourceVersion: 0, AdditionalErrorMsg: 
	I1101 00:35:44.539590       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I1101 00:35:44.622937       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 00:35:44.633518       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 00:35:44.633968       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 00:35:44.634345       1 cache.go:39] Caches are synced for autoregister controller
	I1101 00:35:45.418044       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I1101 00:35:45.418068       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1101 00:35:45.418076       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1101 00:35:45.425153       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1101 00:35:46.122954       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1101 00:35:46.172072       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1101 00:35:46.224639       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1101 00:35:46.224828       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 00:35:46.224987       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1101 00:35:46.225029       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1101 00:35:46.331797       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1101 00:35:46.375186       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 00:35:46.390538       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 00:36:02.547630       1 controller.go:606] quota admission added evaluator for: endpoints
	I1101 00:36:02.870337       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I1101 00:36:02.926348       1 controller.go:606] quota admission added evaluator for: replicasets.apps
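	
	Note: the 503 for "v1beta1.metrics.k8s.io" above means the aggregated metrics API is registered but not yet Available (its backing metrics-server pod was created only ~39s before this dump), so the OpenAPI aggregator requeues it with rate limiting; the same symptom appears in the controller-manager's "unable to retrieve the complete list of server APIs" errors further down. A sketch for inspecting the registration, using the resource name from the log:
	
	  # Read the Available condition of the aggregated API that returned 503:
	  kubectl get apiservice v1beta1.metrics.k8s.io \
	    -o jsonpath='{.status.conditions[?(@.type=="Available")].message}{"\n"}'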
	
	* 
	* ==> kube-apiserver [a28712848ac8] <==
	* W1101 00:32:56.622840       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.39.70]
	I1101 00:32:56.623635       1 controller.go:606] quota admission added evaluator for: endpoints
	I1101 00:32:56.697669       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 00:32:57.538750       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1101 00:32:58.152698       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1101 00:32:58.469906       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1101 00:33:13.152543       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1101 00:33:13.156028       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1101 00:33:13.241814       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	E1101 00:34:41.602264       1 available_controller.go:416] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I1101 00:34:42.223787       1 controller.go:182] Shutting down kubernetes service endpoint reconciler
	I1101 00:34:42.224583       1 controller.go:122] Shutting down OpenAPI controller
	I1101 00:34:42.224601       1 autoregister_controller.go:164] Shutting down autoregister controller
	I1101 00:34:42.224632       1 available_controller.go:395] Shutting down AvailableConditionController
	I1101 00:34:42.224640       1 apiservice_controller.go:106] Shutting down APIServiceRegistrationController
	I1101 00:34:42.224730       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I1101 00:34:42.224744       1 apiapproval_controller.go:197] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I1101 00:34:42.224759       1 nonstructuralschema_controller.go:203] Shutting down NonStructuralSchemaConditionController
	I1101 00:34:42.224771       1 establishing_controller.go:84] Shutting down EstablishingController
	I1101 00:34:42.224781       1 naming_controller.go:299] Shutting down NamingConditionController
	I1101 00:34:42.224792       1 customresource_discovery_controller.go:219] Shutting down DiscoveryController
	I1101 00:34:42.224808       1 crd_finalizer.go:286] Shutting down CRDFinalizer
	I1101 00:34:42.224967       1 controller.go:87] Shutting down OpenAPI AggregationController
	I1101 00:34:42.227346       1 secure_serving.go:167] Stopped listening on [::]:8443
	E1101 00:34:42.296176       1 controller.go:185] Get https://localhost:8443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp 127.0.0.1:8443: connect: connection refused
	
	* 
	* ==> kube-controller-manager [7a1c1e81e44c] <==
	* I1101 00:36:02.767158       1 shared_informer.go:204] Caches are synced for namespace 
	I1101 00:36:02.792784       1 shared_informer.go:204] Caches are synced for service account 
	I1101 00:36:02.822771       1 shared_informer.go:204] Caches are synced for ReplicationController 
	I1101 00:36:02.831492       1 shared_informer.go:204] Caches are synced for garbage collector 
	I1101 00:36:02.831534       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1101 00:36:02.843283       1 shared_informer.go:204] Caches are synced for ReplicaSet 
	I1101 00:36:02.850833       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-74d5856cc6", UID:"8d6893bb-aa11-4a8b-a95d-6e2dcaf60f3f", APIVersion:"apps/v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-74d5856cc6-sqmfp
	I1101 00:36:02.866493       1 shared_informer.go:204] Caches are synced for resource quota 
	I1101 00:36:02.881188       1 shared_informer.go:204] Caches are synced for garbage collector 
	I1101 00:36:02.903710       1 shared_informer.go:204] Caches are synced for disruption 
	I1101 00:36:02.904509       1 disruption.go:341] Sending events to api server.
	I1101 00:36:02.923733       1 shared_informer.go:204] Caches are synced for deployment 
	I1101 00:36:02.937912       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"445787e2-45c1-432e-9466-f22e10f75ee2", APIVersion:"apps/v1", ResourceVersion:"554", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-d6b4b5544 to 1
	I1101 00:36:02.938044       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"81d337ea-6bdc-4298-a875-b9f2d09c8e72", APIVersion:"apps/v1", ResourceVersion:"555", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-84b68f675b to 1
	I1101 00:36:02.969228       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"79983dff-7338-439c-b1a5-e072b8d1ab6e", APIVersion:"apps/v1", ResourceVersion:"539", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-5644d7b6d9 to 1
	I1101 00:36:02.969596       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"34c9bd3c-88bd-4f53-a34c-cd7731b193c0", APIVersion:"apps/v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-d6b4b5544-7ccrz
	I1101 00:36:02.997617       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"c2b7ec54-9031-4a44-a19e-7647d104a6b4", APIVersion:"apps/v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-84b68f675b-98lbc
	I1101 00:36:03.007408       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"6a71933d-85a8-4b69-b8db-fd9e03fa4c90", APIVersion:"apps/v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-5644d7b6d9-kj7pf
	E1101 00:36:03.134109       1 memcache.go:199] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1101 00:36:03.184192       1 memcache.go:111] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1101 00:36:04.010730       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1101 00:36:04.010822       1 shared_informer.go:197] Waiting for caches to sync for resource quota
	I1101 00:36:04.111333       1 shared_informer.go:204] Caches are synced for resource quota 
	E1101 00:36:34.363097       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1101 00:36:34.884982       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [8d704f46a154] <==
	* W1101 00:35:47.887877       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1101 00:35:47.896360       1 node.go:135] Successfully retrieved node IP: 192.168.39.70
	I1101 00:35:47.896417       1 server_others.go:149] Using iptables Proxier.
	I1101 00:35:47.897555       1 server.go:529] Version: v1.16.0
	I1101 00:35:47.899793       1 config.go:313] Starting service config controller
	I1101 00:35:47.899815       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1101 00:35:47.901605       1 config.go:131] Starting endpoints config controller
	I1101 00:35:47.901713       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1101 00:35:48.000070       1 shared_informer.go:204] Caches are synced for service config 
	I1101 00:35:48.001925       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-proxy [ee0792901fa8] <==
	* W1101 00:33:15.663736       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1101 00:33:15.679075       1 node.go:135] Successfully retrieved node IP: 192.168.39.70
	I1101 00:33:15.679143       1 server_others.go:149] Using iptables Proxier.
	I1101 00:33:15.679923       1 server.go:529] Version: v1.16.0
	I1101 00:33:15.681571       1 config.go:131] Starting endpoints config controller
	I1101 00:33:15.681679       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1101 00:33:15.693654       1 config.go:313] Starting service config controller
	I1101 00:33:15.693712       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1101 00:33:15.787411       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1101 00:33:15.794599       1 shared_informer.go:204] Caches are synced for service config 
	E1101 00:34:42.273619       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Endpoints: Get https://control-plane.minikube.internal:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=492&timeout=8m4s&timeoutSeconds=484&watch=true: dial tcp 192.168.39.70:8443: connect: connection refused
	E1101 00:34:42.273714       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=491&timeout=5m8s&timeoutSeconds=308&watch=true: dial tcp 192.168.39.70:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [444c0ced130a] <==
	* E1101 00:32:53.435277       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1101 00:32:53.435645       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1101 00:32:53.435904       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1101 00:32:54.422912       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1101 00:32:54.424562       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1101 00:32:54.426849       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1101 00:32:54.428258       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1101 00:32:54.430514       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1101 00:32:54.432293       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1101 00:32:54.434462       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1101 00:32:54.437070       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1101 00:32:54.437457       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1101 00:32:54.438528       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1101 00:32:54.441771       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1101 00:34:42.236022       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: Get https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=1&timeout=6m44s&timeoutSeconds=404&watch=true: dial tcp 192.168.39.70:8443: connect: connection refused
	E1101 00:34:42.236122       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=474&timeout=9m44s&timeoutSeconds=584&watch=true: dial tcp 192.168.39.70:8443: connect: connection refused
	E1101 00:34:42.236257       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=1&timeout=8m38s&timeoutSeconds=518&watch=true: dial tcp 192.168.39.70:8443: connect: connection refused
	E1101 00:34:42.236407       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=1&timeout=5m31s&timeoutSeconds=331&watch=true: dial tcp 192.168.39.70:8443: connect: connection refused
	E1101 00:34:42.238693       1 reflector.go:280] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to watch *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%3DFailed%2Cstatus.phase%3DSucceeded&resourceVersion=465&timeoutSeconds=310&watch=true: dial tcp 192.168.39.70:8443: connect: connection refused
	E1101 00:34:42.238841       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=5m54s&timeoutSeconds=354&watch=true: dial tcp 192.168.39.70:8443: connect: connection refused
	E1101 00:34:42.238901       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=1&timeout=8m34s&timeoutSeconds=514&watch=true: dial tcp 192.168.39.70:8443: connect: connection refused
	E1101 00:34:42.238926       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=349&timeout=6m45s&timeoutSeconds=405&watch=true: dial tcp 192.168.39.70:8443: connect: connection refused
	E1101 00:34:42.238981       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=1&timeout=7m42s&timeoutSeconds=462&watch=true: dial tcp 192.168.39.70:8443: connect: connection refused
	E1101 00:34:42.242054       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=419&timeout=5m17s&timeoutSeconds=317&watch=true: dial tcp 192.168.39.70:8443: connect: connection refused
	E1101 00:34:42.242125       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=491&timeout=7m15s&timeoutSeconds=435&watch=true: dial tcp 192.168.39.70:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [ef004c92311e] <==
	* I1101 00:35:39.996606       1 serving.go:319] Generated self-signed cert in-memory
	W1101 00:35:44.502948       1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 00:35:44.502993       1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 00:35:44.503003       1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 00:35:44.503013       1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 00:35:44.509518       1 server.go:143] Version: v1.16.0
	I1101 00:35:44.509576       1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W1101 00:35:44.522497       1 authorization.go:47] Authorization is disabled
	W1101 00:35:44.522538       1 authentication.go:79] Authentication is disabled
	I1101 00:35:44.522549       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1101 00:35:44.523086       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-11-01 00:35:06 UTC, ends at Wed 2023-11-01 00:36:41 UTC. --
	Nov 01 00:36:04 old-k8s-version-993392 kubelet[1663]: I1101 00:36:04.274179    1663 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/329b83a0-af26-47d9-b44a-d2af4cb4abab-coredns-token-q6pcr" (OuterVolumeSpecName: "coredns-token-q6pcr") pod "329b83a0-af26-47d9-b44a-d2af4cb4abab" (UID: "329b83a0-af26-47d9-b44a-d2af4cb4abab"). InnerVolumeSpecName "coredns-token-q6pcr". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 01 00:36:04 old-k8s-version-993392 kubelet[1663]: I1101 00:36:04.358751    1663 reconciler.go:301] Volume detached for volume "config-volume" (UniqueName: "kubernetes.io/configmap/329b83a0-af26-47d9-b44a-d2af4cb4abab-config-volume") on node "old-k8s-version-993392" DevicePath ""
	Nov 01 00:36:04 old-k8s-version-993392 kubelet[1663]: I1101 00:36:04.358801    1663 reconciler.go:301] Volume detached for volume "coredns-token-q6pcr" (UniqueName: "kubernetes.io/secret/329b83a0-af26-47d9-b44a-d2af4cb4abab-coredns-token-q6pcr") on node "old-k8s-version-993392" DevicePath ""
	Nov 01 00:36:04 old-k8s-version-993392 kubelet[1663]: W1101 00:36:04.721721    1663 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-84b68f675b-98lbc through plugin: invalid network status for
	Nov 01 00:36:05 old-k8s-version-993392 kubelet[1663]: W1101 00:36:05.034174    1663 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-7ccrz through plugin: invalid network status for
	Nov 01 00:36:05 old-k8s-version-993392 kubelet[1663]: W1101 00:36:05.168916    1663 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-84b68f675b-98lbc through plugin: invalid network status for
	Nov 01 00:36:05 old-k8s-version-993392 kubelet[1663]: W1101 00:36:05.183826    1663 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-7ccrz through plugin: invalid network status for
	Nov 01 00:36:12 old-k8s-version-993392 kubelet[1663]: W1101 00:36:12.347926    1663 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-84b68f675b-98lbc through plugin: invalid network status for
	Nov 01 00:36:13 old-k8s-version-993392 kubelet[1663]: W1101 00:36:13.529890    1663 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-84b68f675b-98lbc through plugin: invalid network status for
	Nov 01 00:36:17 old-k8s-version-993392 kubelet[1663]: E1101 00:36:17.614881    1663 pod_workers.go:191] Error syncing pod da85e132-ee90-421b-8e89-8804f7bb59ca ("storage-provisioner_kube-system(da85e132-ee90-421b-8e89-8804f7bb59ca)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(da85e132-ee90-421b-8e89-8804f7bb59ca)"
	Nov 01 00:36:19 old-k8s-version-993392 kubelet[1663]: E1101 00:36:19.330488    1663 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Nov 01 00:36:19 old-k8s-version-993392 kubelet[1663]: E1101 00:36:19.331292    1663 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Nov 01 00:36:19 old-k8s-version-993392 kubelet[1663]: E1101 00:36:19.332508    1663 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Nov 01 00:36:19 old-k8s-version-993392 kubelet[1663]: E1101 00:36:19.332758    1663 pod_workers.go:191] Error syncing pod b77b51ba-e462-4f30-8063-ca70c7528e90 ("metrics-server-74d5856cc6-sqmfp_kube-system(b77b51ba-e462-4f30-8063-ca70c7528e90)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 01 00:36:19 old-k8s-version-993392 kubelet[1663]: W1101 00:36:19.667281    1663 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-7ccrz through plugin: invalid network status for
	Nov 01 00:36:20 old-k8s-version-993392 kubelet[1663]: W1101 00:36:20.696731    1663 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-7ccrz through plugin: invalid network status for
	Nov 01 00:36:20 old-k8s-version-993392 kubelet[1663]: E1101 00:36:20.712008    1663 pod_workers.go:191] Error syncing pod 7035f6e5-2eee-45a4-b122-c29d94bb9207 ("dashboard-metrics-scraper-d6b4b5544-7ccrz_kubernetes-dashboard(7035f6e5-2eee-45a4-b122-c29d94bb9207)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-7ccrz_kubernetes-dashboard(7035f6e5-2eee-45a4-b122-c29d94bb9207)"
	Nov 01 00:36:21 old-k8s-version-993392 kubelet[1663]: W1101 00:36:21.732446    1663 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-7ccrz through plugin: invalid network status for
	Nov 01 00:36:21 old-k8s-version-993392 kubelet[1663]: E1101 00:36:21.743879    1663 pod_workers.go:191] Error syncing pod 7035f6e5-2eee-45a4-b122-c29d94bb9207 ("dashboard-metrics-scraper-d6b4b5544-7ccrz_kubernetes-dashboard(7035f6e5-2eee-45a4-b122-c29d94bb9207)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-7ccrz_kubernetes-dashboard(7035f6e5-2eee-45a4-b122-c29d94bb9207)"
	Nov 01 00:36:23 old-k8s-version-993392 kubelet[1663]: E1101 00:36:23.579444    1663 pod_workers.go:191] Error syncing pod 7035f6e5-2eee-45a4-b122-c29d94bb9207 ("dashboard-metrics-scraper-d6b4b5544-7ccrz_kubernetes-dashboard(7035f6e5-2eee-45a4-b122-c29d94bb9207)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-7ccrz_kubernetes-dashboard(7035f6e5-2eee-45a4-b122-c29d94bb9207)"
	Nov 01 00:36:33 old-k8s-version-993392 kubelet[1663]: E1101 00:36:33.290727    1663 pod_workers.go:191] Error syncing pod b77b51ba-e462-4f30-8063-ca70c7528e90 ("metrics-server-74d5856cc6-sqmfp_kube-system(b77b51ba-e462-4f30-8063-ca70c7528e90)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 01 00:36:35 old-k8s-version-993392 kubelet[1663]: W1101 00:36:35.949638    1663 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-7ccrz through plugin: invalid network status for
	Nov 01 00:36:37 old-k8s-version-993392 kubelet[1663]: W1101 00:36:37.002233    1663 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-7ccrz through plugin: invalid network status for
	Nov 01 00:36:37 old-k8s-version-993392 kubelet[1663]: E1101 00:36:37.011872    1663 pod_workers.go:191] Error syncing pod 7035f6e5-2eee-45a4-b122-c29d94bb9207 ("dashboard-metrics-scraper-d6b4b5544-7ccrz_kubernetes-dashboard(7035f6e5-2eee-45a4-b122-c29d94bb9207)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-7ccrz_kubernetes-dashboard(7035f6e5-2eee-45a4-b122-c29d94bb9207)"
	Nov 01 00:36:38 old-k8s-version-993392 kubelet[1663]: W1101 00:36:38.023511    1663 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-7ccrz through plugin: invalid network status for
	
	* 
	* ==> kubernetes-dashboard [7e8d46bf3c60] <==
	* 2023/11/01 00:36:12 Starting overwatch
	2023/11/01 00:36:12 Using namespace: kubernetes-dashboard
	2023/11/01 00:36:12 Using in-cluster config to connect to apiserver
	2023/11/01 00:36:12 Using secret token for csrf signing
	2023/11/01 00:36:12 Initializing csrf token from kubernetes-dashboard-csrf secret
	2023/11/01 00:36:12 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2023/11/01 00:36:12 Successful initial request to the apiserver, version: v1.16.0
	2023/11/01 00:36:12 Generating JWE encryption key
	2023/11/01 00:36:12 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2023/11/01 00:36:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2023/11/01 00:36:12 Initializing JWE encryption key from synchronized object
	2023/11/01 00:36:12 Creating in-cluster Sidecar client
	2023/11/01 00:36:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/11/01 00:36:12 Serving insecurely on HTTP port: 9090
	
	* 
	* ==> storage-provisioner [187877149d96] <==
	* I1101 00:35:47.086709       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 00:36:17.099412       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [6afcb3a063fb] <==
	* I1101 00:36:32.467270       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 00:36:32.488463       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 00:36:32.489487       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
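The reflector errors at the top of this dump are the informer caches losing the apiserver while it restarts, and the kube-scheduler section prints its own remediation hint for the extension-apiserver-authentication warning. A hedged adaptation of that hint to the user named in the forbidden error (the binding name is illustrative, and the warning is harmless for this test):

    kubectl --context old-k8s-version-993392 -n kube-system create rolebinding \
      scheduler-auth-reader \
      --role=extension-apiserver-authentication-reader \
      --user=system:kube-scheduler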
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-993392 -n old-k8s-version-993392
E1101 00:36:42.490740   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/flannel-925990/client.crt: no such file or directory
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-993392 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-sqmfp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-993392 describe pod metrics-server-74d5856cc6-sqmfp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-993392 describe pod metrics-server-74d5856cc6-sqmfp: exit status 1 (96.437584ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-sqmfp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-993392 describe pod metrics-server-74d5856cc6-sqmfp: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (3.41s)
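The NotFound above looks like a post-mortem race rather than a second failure: metrics-server-74d5856cc6-sqmfp showed up in the non-running list but was gone by the time describe ran. A hedged way to take that kind of snapshot without tripping on deletion (kubectl get supports --ignore-not-found; describe does not):

    kubectl --context old-k8s-version-993392 -n kube-system get pod \
      metrics-server-74d5856cc6-sqmfp -o yaml --ignore-not-found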

                                                
                                    

Test pass (288/321)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 6.93
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.3/json-events 4.74
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.14
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.13
19 TestBinaryMirror 0.57
20 TestOffline 70.67
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
25 TestAddons/Setup 150.5
27 TestAddons/parallel/Registry 17.44
28 TestAddons/parallel/Ingress 22.61
29 TestAddons/parallel/InspektorGadget 10.73
30 TestAddons/parallel/MetricsServer 6.16
31 TestAddons/parallel/HelmTiller 20.44
33 TestAddons/parallel/CSI 65.81
34 TestAddons/parallel/Headlamp 18.15
35 TestAddons/parallel/CloudSpanner 5.67
36 TestAddons/parallel/LocalPath 55.66
37 TestAddons/parallel/NvidiaDevicePlugin 5.65
40 TestAddons/serial/GCPAuth/Namespaces 0.15
41 TestAddons/StoppedEnableDisable 13.42
42 TestCertOptions 90.14
43 TestCertExpiration 345.7
44 TestDockerFlags 100.89
45 TestForceSystemdFlag 82.19
46 TestForceSystemdEnv 71.8
48 TestKVMDriverInstallOrUpdate 3.01
52 TestErrorSpam/setup 50.24
53 TestErrorSpam/start 0.39
54 TestErrorSpam/status 0.78
55 TestErrorSpam/pause 1.2
56 TestErrorSpam/unpause 1.36
57 TestErrorSpam/stop 4.27
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 102.77
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 36.71
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.08
68 TestFunctional/serial/CacheCmd/cache/add_remote 2.38
69 TestFunctional/serial/CacheCmd/cache/add_local 1.36
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
71 TestFunctional/serial/CacheCmd/cache/list 0.06
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
73 TestFunctional/serial/CacheCmd/cache/cache_reload 1.24
74 TestFunctional/serial/CacheCmd/cache/delete 0.14
75 TestFunctional/serial/MinikubeKubectlCmd 0.12
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
77 TestFunctional/serial/ExtraConfig 42.22
78 TestFunctional/serial/ComponentHealth 0.07
79 TestFunctional/serial/LogsCmd 1.1
80 TestFunctional/serial/LogsFileCmd 1.09
81 TestFunctional/serial/InvalidService 4.48
83 TestFunctional/parallel/ConfigCmd 0.44
84 TestFunctional/parallel/DashboardCmd 41.09
85 TestFunctional/parallel/DryRun 0.32
86 TestFunctional/parallel/InternationalLanguage 0.16
87 TestFunctional/parallel/StatusCmd 1.15
91 TestFunctional/parallel/ServiceCmdConnect 11.54
92 TestFunctional/parallel/AddonsCmd 0.16
93 TestFunctional/parallel/PersistentVolumeClaim 57.38
95 TestFunctional/parallel/SSHCmd 0.54
96 TestFunctional/parallel/CpCmd 1.02
97 TestFunctional/parallel/MySQL 39.44
98 TestFunctional/parallel/FileSync 0.28
99 TestFunctional/parallel/CertSync 1.58
103 TestFunctional/parallel/NodeLabels 0.07
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.26
107 TestFunctional/parallel/License 0.2
108 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
109 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
110 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
111 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
112 TestFunctional/parallel/ImageCommands/ImageBuild 3.13
113 TestFunctional/parallel/ImageCommands/Setup 1.37
114 TestFunctional/parallel/DockerEnv/bash 1.16
115 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
118 TestFunctional/parallel/ServiceCmd/DeployApp 13.25
119 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.35
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.52
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.1
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.29
132 TestFunctional/parallel/Version/short 0.15
133 TestFunctional/parallel/Version/components 1.1
134 TestFunctional/parallel/ServiceCmd/List 0.34
135 TestFunctional/parallel/ServiceCmd/JSONOutput 0.39
136 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
137 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
138 TestFunctional/parallel/ServiceCmd/Format 0.4
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.69
140 TestFunctional/parallel/ProfileCmd/profile_list 0.38
141 TestFunctional/parallel/ServiceCmd/URL 0.35
142 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
143 TestFunctional/parallel/MountCmd/any-port 27.21
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.38
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.82
146 TestFunctional/parallel/MountCmd/specific-port 1.74
147 TestFunctional/parallel/MountCmd/VerifyCleanup 1.76
148 TestFunctional/delete_addon-resizer_images 0.07
149 TestFunctional/delete_my-image_image 0.02
150 TestFunctional/delete_minikube_cached_images 0.02
151 TestGvisorAddon 368.1
154 TestImageBuild/serial/Setup 51.3
155 TestImageBuild/serial/NormalBuild 1.67
156 TestImageBuild/serial/BuildWithBuildArg 1.33
157 TestImageBuild/serial/BuildWithDockerIgnore 0.39
158 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.3
161 TestIngressAddonLegacy/StartLegacyK8sCluster 78.98
163 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.45
164 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.59
165 TestIngressAddonLegacy/serial/ValidateIngressAddons 39.42
168 TestJSONOutput/start/Command 102.21
169 TestJSONOutput/start/Audit 0
171 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/pause/Command 0.57
175 TestJSONOutput/pause/Audit 0
177 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/unpause/Command 0.52
181 TestJSONOutput/unpause/Audit 0
183 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/stop/Command 8.1
187 TestJSONOutput/stop/Audit 0
189 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
191 TestErrorJSONOutput 0.22
196 TestMainNoArgs 0.06
197 TestMinikubeProfile 104.69
200 TestMountStart/serial/StartWithMountFirst 28.58
201 TestMountStart/serial/VerifyMountFirst 0.39
202 TestMountStart/serial/StartWithMountSecond 30.47
203 TestMountStart/serial/VerifyMountSecond 0.41
204 TestMountStart/serial/DeleteFirst 0.68
205 TestMountStart/serial/VerifyMountPostDelete 0.42
206 TestMountStart/serial/Stop 2.09
207 TestMountStart/serial/RestartStopped 24.49
208 TestMountStart/serial/VerifyMountPostStop 0.4
211 TestMultiNode/serial/FreshStart2Nodes 131.88
212 TestMultiNode/serial/DeployApp2Nodes 5.15
213 TestMultiNode/serial/PingHostFrom2Pods 0.95
214 TestMultiNode/serial/AddNode 46.88
215 TestMultiNode/serial/ProfileList 0.22
216 TestMultiNode/serial/CopyFile 7.91
217 TestMultiNode/serial/StopNode 3.34
218 TestMultiNode/serial/StartAfterStop 31.37
219 TestMultiNode/serial/RestartKeepsNodes 185.66
220 TestMultiNode/serial/DeleteNode 1.77
221 TestMultiNode/serial/StopMultiNode 25.56
223 TestMultiNode/serial/ValidateNameConflict 52.33
228 TestPreload 176.96
230 TestScheduledStopUnix 123.3
231 TestSkaffold 141.67
234 TestRunningBinaryUpgrade 184.41
236 TestKubernetesUpgrade 219.21
249 TestStoppedBinaryUpgrade/Setup 0.38
250 TestStoppedBinaryUpgrade/Upgrade 212.85
251 TestStoppedBinaryUpgrade/MinikubeLogs 1.45
260 TestPause/serial/Start 80.03
262 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
263 TestNoKubernetes/serial/StartWithK8s 89.38
264 TestPause/serial/SecondStartNoReconfiguration 59.05
265 TestNetworkPlugins/group/auto/Start 82.17
266 TestNetworkPlugins/group/kindnet/Start 106.49
267 TestNoKubernetes/serial/StartWithStopK8s 36.21
268 TestPause/serial/Pause 0.83
269 TestPause/serial/VerifyStatus 0.36
270 TestPause/serial/Unpause 0.71
271 TestPause/serial/PauseAgain 0.8
272 TestPause/serial/DeletePaused 1.22
273 TestPause/serial/VerifyDeletedResources 0.4
274 TestNetworkPlugins/group/calico/Start 119.85
275 TestNoKubernetes/serial/Start 52.77
276 TestNetworkPlugins/group/auto/KubeletFlags 0.23
277 TestNetworkPlugins/group/auto/NetCatPod 12.41
278 TestNetworkPlugins/group/auto/DNS 0.21
279 TestNetworkPlugins/group/auto/Localhost 0.18
280 TestNetworkPlugins/group/auto/HairPin 0.18
281 TestNetworkPlugins/group/custom-flannel/Start 88.45
282 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
283 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
284 TestNetworkPlugins/group/kindnet/NetCatPod 13.42
285 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
286 TestNoKubernetes/serial/ProfileList 1.28
287 TestNoKubernetes/serial/Stop 2.29
288 TestNoKubernetes/serial/StartNoArgs 43.17
289 TestNetworkPlugins/group/kindnet/DNS 0.23
290 TestNetworkPlugins/group/kindnet/Localhost 0.18
291 TestNetworkPlugins/group/kindnet/HairPin 0.19
292 TestNetworkPlugins/group/false/Start 118.51
293 TestNetworkPlugins/group/calico/ControllerPod 5.03
294 TestNetworkPlugins/group/calico/KubeletFlags 0.24
295 TestNetworkPlugins/group/calico/NetCatPod 13.43
296 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
297 TestNetworkPlugins/group/enable-default-cni/Start 104.05
298 TestNetworkPlugins/group/calico/DNS 0.28
299 TestNetworkPlugins/group/calico/Localhost 0.2
300 TestNetworkPlugins/group/calico/HairPin 0.16
301 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
302 TestNetworkPlugins/group/custom-flannel/NetCatPod 15.57
303 TestNetworkPlugins/group/flannel/Start 111.8
304 TestNetworkPlugins/group/custom-flannel/DNS 0.22
305 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
306 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
307 TestNetworkPlugins/group/bridge/Start 101.11
308 TestNetworkPlugins/group/false/KubeletFlags 0.23
309 TestNetworkPlugins/group/false/NetCatPod 13.42
310 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
311 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.35
312 TestNetworkPlugins/group/false/DNS 0.19
313 TestNetworkPlugins/group/false/Localhost 0.2
314 TestNetworkPlugins/group/false/HairPin 0.17
315 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
316 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
317 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
318 TestNetworkPlugins/group/kubenet/Start 81.82
320 TestStartStop/group/old-k8s-version/serial/FirstStart 174.98
321 TestNetworkPlugins/group/flannel/ControllerPod 5.02
322 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
323 TestNetworkPlugins/group/flannel/NetCatPod 10.44
324 TestNetworkPlugins/group/flannel/DNS 0.2
325 TestNetworkPlugins/group/flannel/Localhost 0.16
326 TestNetworkPlugins/group/flannel/HairPin 0.16
327 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
328 TestNetworkPlugins/group/bridge/NetCatPod 14.46
329 TestNetworkPlugins/group/bridge/DNS 0.23
330 TestNetworkPlugins/group/bridge/Localhost 0.18
331 TestNetworkPlugins/group/bridge/HairPin 0.21
333 TestStartStop/group/no-preload/serial/FirstStart 135.29
335 TestStartStop/group/embed-certs/serial/FirstStart 132.48
336 TestNetworkPlugins/group/kubenet/KubeletFlags 0.22
337 TestNetworkPlugins/group/kubenet/NetCatPod 12.38
338 TestNetworkPlugins/group/kubenet/DNS 0.2
339 TestNetworkPlugins/group/kubenet/Localhost 0.19
340 TestNetworkPlugins/group/kubenet/HairPin 0.22
342 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 73.62
343 TestStartStop/group/old-k8s-version/serial/DeployApp 8.56
344 TestStartStop/group/no-preload/serial/DeployApp 10.55
345 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.49
346 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.91
347 TestStartStop/group/old-k8s-version/serial/Stop 13.14
348 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.14
349 TestStartStop/group/no-preload/serial/Stop 13.13
350 TestStartStop/group/embed-certs/serial/DeployApp 8.42
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.14
352 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.13
353 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.13
354 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
355 TestStartStop/group/old-k8s-version/serial/SecondStart 94.43
356 TestStartStop/group/embed-certs/serial/Stop 13.13
357 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
358 TestStartStop/group/no-preload/serial/SecondStart 325.41
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
360 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 338.52
361 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
362 TestStartStop/group/embed-certs/serial/SecondStart 383.68
363 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
364 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
366 TestStartStop/group/old-k8s-version/serial/Pause 2.99
368 TestStartStop/group/newest-cni/serial/FirstStart 71.61
369 TestStartStop/group/newest-cni/serial/DeployApp 0
370 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.05
371 TestStartStop/group/newest-cni/serial/Stop 13.14
372 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
373 TestStartStop/group/newest-cni/serial/SecondStart 47.13
374 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
375 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
376 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
377 TestStartStop/group/newest-cni/serial/Pause 2.69
378 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
379 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
380 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
381 TestStartStop/group/no-preload/serial/Pause 2.72
382 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.03
383 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
384 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
385 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.61
386 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
387 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
388 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
389 TestStartStop/group/embed-certs/serial/Pause 2.53
x
+
TestDownloadOnly/v1.16.0/json-events (6.93s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-168841 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-168841 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (6.930624367s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (6.93s)
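With -o=json, minikube prints one CloudEvents-style JSON object per line on stdout, which is what json-events asserts on. A hedged sketch for listing just the event types (the .type field name is assumed from minikube's JSON output; requires jq):

    out/minikube-linux-amd64 start -o=json --download-only -p download-only-168841 \
      --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 \
      | jq -r '.type'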

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
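preload-exists only asserts that the tarball fetched by the previous step landed on disk. A hedged equivalent check, using the cache path the download log prints further below:

    ls -lh /home/jenkins/minikube-integration/17486-7251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4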

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-168841
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-168841: exit status 85 (71.106641ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-168841 | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:43 UTC |          |
	|         | -p download-only-168841        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |                |                     |          |
	|         | --container-runtime=docker     |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/31 23:43:36
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 23:43:36.490377   14474 out.go:296] Setting OutFile to fd 1 ...
	I1031 23:43:36.490625   14474 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 23:43:36.490635   14474 out.go:309] Setting ErrFile to fd 2...
	I1031 23:43:36.490639   14474 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 23:43:36.490798   14474 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7251/.minikube/bin
	W1031 23:43:36.490904   14474 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17486-7251/.minikube/config/config.json: open /home/jenkins/minikube-integration/17486-7251/.minikube/config/config.json: no such file or directory
	I1031 23:43:36.491489   14474 out.go:303] Setting JSON to true
	I1031 23:43:36.492328   14474 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1566,"bootTime":1698794251,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 23:43:36.492385   14474 start.go:138] virtualization: kvm guest
	I1031 23:43:36.495023   14474 out.go:97] [download-only-168841] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 23:43:36.496598   14474 out.go:169] MINIKUBE_LOCATION=17486
	W1031 23:43:36.495129   14474 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17486-7251/.minikube/cache/preloaded-tarball: no such file or directory
	I1031 23:43:36.495160   14474 notify.go:220] Checking for updates...
	I1031 23:43:36.499709   14474 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 23:43:36.501246   14474 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17486-7251/kubeconfig
	I1031 23:43:36.502662   14474 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7251/.minikube
	I1031 23:43:36.504144   14474 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1031 23:43:36.506972   14474 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1031 23:43:36.507183   14474 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 23:43:36.607992   14474 out.go:97] Using the kvm2 driver based on user configuration
	I1031 23:43:36.608014   14474 start.go:298] selected driver: kvm2
	I1031 23:43:36.608020   14474 start.go:902] validating driver "kvm2" against <nil>
	I1031 23:43:36.608330   14474 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 23:43:36.608456   14474 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17486-7251/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 23:43:36.622642   14474 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 23:43:36.622700   14474 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1031 23:43:36.623185   14474 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1031 23:43:36.623373   14474 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1031 23:43:36.623438   14474 cni.go:84] Creating CNI manager for ""
	I1031 23:43:36.623457   14474 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1031 23:43:36.623467   14474 start_flags.go:323] config:
	{Name:download-only-168841 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-168841 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 23:43:36.623677   14474 iso.go:125] acquiring lock: {Name:mk56e0e42e3cb427bae1fd4521b75db693021ac1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 23:43:36.625853   14474 out.go:97] Downloading VM boot image ...
	I1031 23:43:36.625894   14474 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17486-7251/.minikube/cache/iso/amd64/minikube-v1.32.0-1698773592-17486-amd64.iso
	I1031 23:43:39.669548   14474 out.go:97] Starting control plane node download-only-168841 in cluster download-only-168841
	I1031 23:43:39.669573   14474 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1031 23:43:39.696910   14474 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1031 23:43:39.696959   14474 cache.go:56] Caching tarball of preloaded images
	I1031 23:43:39.697134   14474 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1031 23:43:39.699265   14474 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1031 23:43:39.699292   14474 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1031 23:43:39.727673   14474 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/17486-7251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-168841"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/json-events (4.74s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-168841 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-168841 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=docker --driver=kvm2 : (4.743415495s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (4.74s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-168841
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-168841: exit status 85 (72.099988ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-168841 | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:43 UTC |          |
	|         | -p download-only-168841        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |                |                     |          |
	|         | --container-runtime=docker     |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	| start   | -o=json --download-only        | download-only-168841 | jenkins | v1.32.0-beta.0 | 31 Oct 23 23:43 UTC |          |
	|         | -p download-only-168841        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |                |                     |          |
	|         | --container-runtime=docker     |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/31 23:43:43
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 23:43:43.494685   14522 out.go:296] Setting OutFile to fd 1 ...
	I1031 23:43:43.494972   14522 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 23:43:43.494984   14522 out.go:309] Setting ErrFile to fd 2...
	I1031 23:43:43.494992   14522 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 23:43:43.495210   14522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7251/.minikube/bin
	W1031 23:43:43.495344   14522 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17486-7251/.minikube/config/config.json: open /home/jenkins/minikube-integration/17486-7251/.minikube/config/config.json: no such file or directory
	I1031 23:43:43.495837   14522 out.go:303] Setting JSON to true
	I1031 23:43:43.496750   14522 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1573,"bootTime":1698794251,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 23:43:43.496820   14522 start.go:138] virtualization: kvm guest
	I1031 23:43:43.499121   14522 out.go:97] [download-only-168841] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 23:43:43.500840   14522 out.go:169] MINIKUBE_LOCATION=17486
	I1031 23:43:43.499277   14522 notify.go:220] Checking for updates...
	I1031 23:43:43.503859   14522 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 23:43:43.505267   14522 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17486-7251/kubeconfig
	I1031 23:43:43.506708   14522 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7251/.minikube
	I1031 23:43:43.508137   14522 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-168841"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-168841
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-358774 --alsologtostderr --binary-mirror http://127.0.0.1:36063 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-358774" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-358774
--- PASS: TestBinaryMirror (0.57s)
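--binary-mirror points minikube at an alternate location for the kubectl, kubelet, and kubeadm binaries instead of the default upstream. The test stands up a short-lived local HTTP server for this; a hedged stand-alone sketch of the same idea (./mirror is illustrative and must reproduce the upstream path layout):

    python3 -m http.server 36063 --directory ./mirror &
    out/minikube-linux-amd64 start --download-only -p binary-mirror-358774 \
      --alsologtostderr --binary-mirror http://127.0.0.1:36063 --driver=kvm2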

                                                
                                    
x
+
TestOffline (70.67s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-822434 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-822434 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m9.800548875s)
helpers_test.go:175: Cleaning up "offline-docker-822434" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-822434
--- PASS: TestOffline (70.67s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-424039
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-424039: exit status 85 (70.942729ms)

                                                
                                                
-- stdout --
	* Profile "addons-424039" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-424039"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
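The PASS here hinges on the command failing cleanly: enabling an addon against a nonexistent profile must exit 85 with the hint above instead of creating anything. A hedged manual check (no-such-profile is illustrative):

    out/minikube-linux-amd64 addons enable dashboard -p no-such-profile; echo "exit=$?"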

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-424039
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-424039: exit status 85 (73.294511ms)

                                                
                                                
-- stdout --
	* Profile "addons-424039" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-424039"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (150.5s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-424039 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-424039 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m30.499937679s)
--- PASS: TestAddons/Setup (150.50s)
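With this many --addons flags it is easy to lose track of what actually came up; the enabled set can be confirmed per profile afterwards. A hedged sketch:

    out/minikube-linux-amd64 addons list -p addons-424039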

                                                
                                    
x
+
TestAddons/parallel/Registry (17.44s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 22.371223ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-2gxnt" [1448bf0c-baab-4579-8156-544af0f9940c] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.025193239s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-r58qj" [b5fa8968-1459-49d6-a1b3-76ee4761c10e] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.017646982s
addons_test.go:339: (dbg) Run:  kubectl --context addons-424039 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-424039 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-424039 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.225695149s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-424039 ip
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-424039 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.44s)
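Besides the in-cluster wget probe, the registry addon is reachable from the host on the node IP at port 5000 (the stray DEBUG GET in the MetricsServer section below hits that same endpoint). A hedged sketch of pushing an image through it; the host docker daemon typically needs that address listed as an insecure registry first:

    # hedged: requires a local busybox image and insecure-registry config for $REG
    REG="$(out/minikube-linux-amd64 -p addons-424039 ip):5000"
    docker tag busybox "$REG/busybox"
    docker push "$REG/busybox"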

                                                
                                    
x
+
TestAddons/parallel/Ingress (22.61s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-424039 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-424039 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-424039 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a6a5ac97-2bd9-4cc4-9c2f-c3c87b6ba193] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a6a5ac97-2bd9-4cc4-9c2f-c3c87b6ba193] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.021247942s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-424039 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-424039 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-424039 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.98
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-424039 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-424039 addons disable ingress-dns --alsologtostderr -v=1: (2.213748953s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-424039 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-424039 addons disable ingress --alsologtostderr -v=1: (7.778639559s)
--- PASS: TestAddons/parallel/Ingress (22.61s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.73s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-j7gwp" [b049c5e3-ae3a-46eb-8505-3fbf1cabc1c6] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.014542435s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-424039
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-424039: (5.714346452s)
--- PASS: TestAddons/parallel/InspektorGadget (10.73s)

TestAddons/parallel/MetricsServer (6.16s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 6.751229ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-7tsbg" [16bf6f85-1b51-48d4-bf08-dc7379da7bbe] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.017660121s
addons_test.go:414: (dbg) Run:  kubectl --context addons-424039 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-424039 addons disable metrics-server --alsologtostderr -v=1
2023/10/31 23:46:36 [DEBUG] GET http://192.168.39.98:5000
addons_test.go:431: (dbg) Done: out/minikube-linux-amd64 -p addons-424039 addons disable metrics-server --alsologtostderr -v=1: (1.055740565s)
--- PASS: TestAddons/parallel/MetricsServer (6.16s)

TestAddons/parallel/HelmTiller (20.44s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 4.173706ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-bcmt2" [39013eef-e41b-4062-9c84-09b82e921144] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.018618118s
addons_test.go:472: (dbg) Run:  kubectl --context addons-424039 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-424039 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (11.367384998s)
addons_test.go:477: kubectl --context addons-424039 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:472: (dbg) Run:  kubectl --context addons-424039 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-424039 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.209334008s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-424039 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (20.44s)

TestAddons/parallel/CSI (65.81s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 22.906549ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-424039 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-424039 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c9b9463a-671d-49ab-9549-6986a31596f7] Pending
helpers_test.go:344: "task-pv-pod" [c9b9463a-671d-49ab-9549-6986a31596f7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c9b9463a-671d-49ab-9549-6986a31596f7] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 20.068117831s
addons_test.go:583: (dbg) Run:  kubectl --context addons-424039 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-424039 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-424039 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-424039 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-424039 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-424039 delete pod task-pv-pod: (1.003287259s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-424039 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-424039 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-424039 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [7cead23d-20c7-48ea-8ed6-d6d99ce157a3] Pending
helpers_test.go:344: "task-pv-pod-restore" [7cead23d-20c7-48ea-8ed6-d6d99ce157a3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [7cead23d-20c7-48ea-8ed6-d6d99ce157a3] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.022194935s
addons_test.go:625: (dbg) Run:  kubectl --context addons-424039 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-424039 delete pod task-pv-pod-restore: (1.165297187s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-424039 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-424039 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-424039 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-424039 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.763020942s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-424039 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (65.81s)
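
Note: the CSI block above is a full provision, snapshot, and restore cycle. Condensed, using the same testdata manifests (a sketch of the sequence, not the exact test code):

	kubectl --context addons-424039 create -f testdata/csi-hostpath-driver/pvc.yaml        # provision hpvc
	kubectl --context addons-424039 create -f testdata/csi-hostpath-driver/pv-pod.yaml     # consume it
	kubectl --context addons-424039 create -f testdata/csi-hostpath-driver/snapshot.yaml   # snapshot new-snapshot-demo
	kubectl --context addons-424039 delete pod task-pv-pod
	kubectl --context addons-424039 delete pvc hpvc
	kubectl --context addons-424039 create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # restore from the snapshot
	kubectl --context addons-424039 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml # mount the restored volume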

                                                
                                    
TestAddons/parallel/Headlamp (18.15s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-424039 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-424039 --alsologtostderr -v=1: (2.115859658s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-94b766c-6j775" [05b9e955-60e3-4a2f-a73e-e059ebec2188] Pending
helpers_test.go:344: "headlamp-94b766c-6j775" [05b9e955-60e3-4a2f-a73e-e059ebec2188] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-94b766c-6j775" [05b9e955-60e3-4a2f-a73e-e059ebec2188] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.037329722s
--- PASS: TestAddons/parallel/Headlamp (18.15s)

TestAddons/parallel/CloudSpanner (5.67s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-hbdc8" [e7a45f88-1885-464f-9d6c-8676a66be014] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.012893749s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-424039
--- PASS: TestAddons/parallel/CloudSpanner (5.67s)

TestAddons/parallel/LocalPath (55.66s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-424039 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-424039 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-424039 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [67f30351-fc1d-412d-9980-c34bb789311d] Pending
helpers_test.go:344: "test-local-path" [67f30351-fc1d-412d-9980-c34bb789311d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [67f30351-fc1d-412d-9980-c34bb789311d] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [67f30351-fc1d-412d-9980-c34bb789311d] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.040991464s
addons_test.go:890: (dbg) Run:  kubectl --context addons-424039 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-424039 ssh "cat /opt/local-path-provisioner/pvc-4e5fce19-d185-484c-90bc-2c923d286be3_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-424039 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-424039 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-424039 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-424039 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.873351007s)
--- PASS: TestAddons/parallel/LocalPath (55.66s)
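
Note: the local-path check writes a file through a PVC and then reads it back from the node's hostpath. A manual sketch (the pvc-... directory name is generated per run, so it is a placeholder here):

	kubectl --context addons-424039 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-424039 apply -f testdata/storage-provisioner-rancher/pod.yaml
	# once the pod completes, the data is visible on the node itself
	out/minikube-linux-amd64 -p addons-424039 ssh "cat /opt/local-path-provisioner/<pvc-id>_default_test-pvc/file1"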

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.65s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fbswq" [6c39df85-12ef-48f0-8938-25d4861892bf] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.042512507s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-424039
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.65s)

TestAddons/serial/GCPAuth/Namespaces (0.15s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-424039 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-424039 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

TestAddons/StoppedEnableDisable (13.42s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-424039
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-424039: (13.111729341s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-424039
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-424039
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-424039
--- PASS: TestAddons/StoppedEnableDisable (13.42s)

TestCertOptions (90.14s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-027908 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-027908 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m27.928363182s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-027908 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-027908 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-027908 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-027908" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-027908
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-027908: (1.659967254s)
--- PASS: TestCertOptions (90.14s)
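
Note: what this test asserts can be checked by hand: start with extra SANs and a non-default apiserver port, then read them back out of the served certificate. A sketch (the grep is an assumption about openssl's text layout, not part of the test):

	out/minikube-linux-amd64 start -p cert-options-027908 --memory=2048 \
	  --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2
	out/minikube-linux-amd64 -p cert-options-027908 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'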

                                                
                                    
TestCertExpiration (345.7s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-775817 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-775817 --memory=2048 --cert-expiration=3m --driver=kvm2 : (2m7.912911398s)
E1101 00:20:43.098013   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-775817 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-775817 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (36.403987424s)
helpers_test.go:175: Cleaning up "cert-expiration-775817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-775817
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-775817: (1.381022873s)
--- PASS: TestCertExpiration (345.70s)

TestDockerFlags (100.89s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-751607 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-751607 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m39.420981157s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-751607 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-751607 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-751607" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-751607
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-751607: (1.02066659s)
--- PASS: TestDockerFlags (100.89s)
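
Note: the flags under test flow into the systemd unit for dockerd: --docker-env entries land in the Environment property, --docker-opt entries in the ExecStart command line. The same two queries from the log can be run by hand:

	out/minikube-linux-amd64 -p docker-flags-751607 ssh "sudo systemctl show docker --property=Environment --no-pager"
	out/minikube-linux-amd64 -p docker-flags-751607 ssh "sudo systemctl show docker --property=ExecStart --no-pager"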

                                                
                                    
TestForceSystemdFlag (82.19s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-848352 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-848352 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m20.860428723s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-848352 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-848352" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-848352
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-848352: (1.053873253s)
--- PASS: TestForceSystemdFlag (82.19s)
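
Note: the assertion behind this test is a one-liner: with --force-systemd, the container runtime must report the systemd cgroup driver. A sketch of the manual check:

	out/minikube-linux-amd64 start -p force-systemd-flag-848352 --memory=2048 --force-systemd --driver=kvm2
	out/minikube-linux-amd64 -p force-systemd-flag-848352 ssh "docker info --format {{.CgroupDriver}}"   # expect: systemd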

                                                
                                    
TestForceSystemdEnv (71.8s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-822149 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
E1101 00:24:22.860584   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
E1101 00:24:41.689089   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/skaffold-440701/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-822149 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m9.939952983s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-822149 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-822149" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-822149
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-822149: (1.409984325s)
--- PASS: TestForceSystemdEnv (71.80s)

TestKVMDriverInstallOrUpdate (3.01s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.01s)

TestErrorSpam/setup (50.24s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-839299 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-839299 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-839299 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-839299 --driver=kvm2 : (50.238598962s)
--- PASS: TestErrorSpam/setup (50.24s)

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839299 --log_dir /tmp/nospam-839299 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839299 --log_dir /tmp/nospam-839299 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839299 --log_dir /tmp/nospam-839299 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.78s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839299 --log_dir /tmp/nospam-839299 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839299 --log_dir /tmp/nospam-839299 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839299 --log_dir /tmp/nospam-839299 status
--- PASS: TestErrorSpam/status (0.78s)

TestErrorSpam/pause (1.2s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839299 --log_dir /tmp/nospam-839299 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839299 --log_dir /tmp/nospam-839299 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839299 --log_dir /tmp/nospam-839299 pause
--- PASS: TestErrorSpam/pause (1.20s)

TestErrorSpam/unpause (1.36s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839299 --log_dir /tmp/nospam-839299 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839299 --log_dir /tmp/nospam-839299 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839299 --log_dir /tmp/nospam-839299 unpause
--- PASS: TestErrorSpam/unpause (1.36s)

TestErrorSpam/stop (4.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839299 --log_dir /tmp/nospam-839299 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-839299 --log_dir /tmp/nospam-839299 stop: (4.097245306s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839299 --log_dir /tmp/nospam-839299 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839299 --log_dir /tmp/nospam-839299 stop
--- PASS: TestErrorSpam/stop (4.27s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17486-7251/.minikube/files/etc/test/nested/copy/14463/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (102.77s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-238689 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-238689 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m42.764982206s)
--- PASS: TestFunctional/serial/StartWithProxy (102.77s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.71s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-238689 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-238689 --alsologtostderr -v=8: (36.714021693s)
functional_test.go:659: soft start took 36.714742587s for "functional-238689" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.71s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-238689 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.38s)

TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-238689 /tmp/TestFunctionalserialCacheCmdcacheadd_local459916979/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 cache add minikube-local-cache-test:functional-238689
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-238689 cache add minikube-local-cache-test:functional-238689: (1.013033706s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 cache delete minikube-local-cache-test:functional-238689
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-238689
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh sudo crictl images
E1031 23:51:19.813570   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
E1031 23:51:19.820084   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh sudo docker rmi registry.k8s.io/pause:latest
E1031 23:51:19.831221   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
E1031 23:51:19.851516   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
E1031 23:51:19.891791   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
E1031 23:51:19.972117   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E1031 23:51:20.132436   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-238689 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (255.498779ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 cache reload
E1031 23:51:20.453127   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.24s)
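
Note: the reload sequence above is worth spelling out: the image is removed out-of-band, shown to be gone, and then restored from minikube's on-disk cache rather than pulled from the network. Replayed by hand:

	out/minikube-linux-amd64 -p functional-238689 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-238689 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: image gone
	out/minikube-linux-amd64 -p functional-238689 cache reload
	out/minikube-linux-amd64 -p functional-238689 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again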

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
E1031 23:51:21.093566   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 kubectl -- --context functional-238689 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-238689 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (42.22s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-238689 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1031 23:51:22.373981   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
E1031 23:51:24.935347   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
E1031 23:51:30.055678   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
E1031 23:51:40.296460   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
E1031 23:52:00.777031   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-238689 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.219159185s)
functional_test.go:757: restart took 42.219282096s for "functional-238689" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.22s)
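
Note: --extra-config takes component.key=value pairs and is merged into the existing profile on restart, which is why the log shows a plain restart rather than a recreate. The invocation from the log:

	out/minikube-linux-amd64 start -p functional-238689 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all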

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-238689 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.1s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-238689 logs: (1.099706491s)
--- PASS: TestFunctional/serial/LogsCmd (1.10s)

TestFunctional/serial/LogsFileCmd (1.09s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 logs --file /tmp/TestFunctionalserialLogsFileCmd3859138977/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-238689 logs --file /tmp/TestFunctionalserialLogsFileCmd3859138977/001/logs.txt: (1.085490897s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.09s)

TestFunctional/serial/InvalidService (4.48s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-238689 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-238689
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-238689: exit status 115 (301.576894ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.177:31397 |
	|-----------|-------------|-------------|-----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-238689 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.48s)

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-238689 config get cpus: exit status 14 (67.759607ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-238689 config get cpus: exit status 14 (67.883759ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

TestFunctional/parallel/DashboardCmd (41.09s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-238689 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-238689 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 21266: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (41.09s)

TestFunctional/parallel/DryRun (0.32s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-238689 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-238689 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (173.062237ms)

-- stdout --
	* [functional-238689] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17486
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17486-7251/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7251/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
-- /stdout --
** stderr ** 
	I1031 23:52:27.159288   20966 out.go:296] Setting OutFile to fd 1 ...
	I1031 23:52:27.159474   20966 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 23:52:27.159485   20966 out.go:309] Setting ErrFile to fd 2...
	I1031 23:52:27.159492   20966 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 23:52:27.159714   20966 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7251/.minikube/bin
	I1031 23:52:27.160299   20966 out.go:303] Setting JSON to false
	I1031 23:52:27.161358   20966 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2096,"bootTime":1698794251,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 23:52:27.161427   20966 start.go:138] virtualization: kvm guest
	I1031 23:52:27.164076   20966 out.go:177] * [functional-238689] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 23:52:27.165671   20966 out.go:177]   - MINIKUBE_LOCATION=17486
	I1031 23:52:27.165725   20966 notify.go:220] Checking for updates...
	I1031 23:52:27.167357   20966 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 23:52:27.169282   20966 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-7251/kubeconfig
	I1031 23:52:27.170908   20966 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7251/.minikube
	I1031 23:52:27.172613   20966 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 23:52:27.174194   20966 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 23:52:27.176361   20966 config.go:182] Loaded profile config "functional-238689": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 23:52:27.177007   20966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 23:52:27.177080   20966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:52:27.193877   20966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33663
	I1031 23:52:27.194293   20966 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:52:27.195001   20966 main.go:141] libmachine: Using API Version  1
	I1031 23:52:27.195031   20966 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:52:27.195436   20966 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:52:27.195624   20966 main.go:141] libmachine: (functional-238689) Calling .DriverName
	I1031 23:52:27.195922   20966 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 23:52:27.196339   20966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 23:52:27.196390   20966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:52:27.215205   20966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42969
	I1031 23:52:27.215716   20966 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:52:27.216255   20966 main.go:141] libmachine: Using API Version  1
	I1031 23:52:27.216301   20966 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:52:27.216648   20966 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:52:27.216840   20966 main.go:141] libmachine: (functional-238689) Calling .DriverName
	I1031 23:52:27.254320   20966 out.go:177] * Using the kvm2 driver based on existing profile
	I1031 23:52:27.255945   20966 start.go:298] selected driver: kvm2
	I1031 23:52:27.255966   20966 start.go:902] validating driver "kvm2" against &{Name:functional-238689 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-238689 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.177 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 23:52:27.256128   20966 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 23:52:27.258551   20966 out.go:177] 
	W1031 23:52:27.260128   20966 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1031 23:52:27.261662   20966 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-238689 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.32s)
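
The two dry-run invocations above differ only in the memory request; a minimal Go sketch of the same check, assuming the binary path from the log: run start --dry-run and read the exit code, where 23 is how minikube signals RSRC_INSUFFICIENT_REQ_MEMORY.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-238689",
		"--dry-run", "--memory", "250MB", "--alsologtostderr", "--driver=kvm2")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// 23 corresponds to RSRC_INSUFFICIENT_REQ_MEMORY in the run above.
		fmt.Println("exit code:", exitErr.ExitCode())
	}
}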

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-238689 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-238689 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (158.388496ms)

-- stdout --
	* [functional-238689] minikube v1.32.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17486
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17486-7251/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7251/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1031 23:52:27.479703   21026 out.go:296] Setting OutFile to fd 1 ...
	I1031 23:52:27.480009   21026 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 23:52:27.480021   21026 out.go:309] Setting ErrFile to fd 2...
	I1031 23:52:27.480029   21026 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 23:52:27.480461   21026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7251/.minikube/bin
	I1031 23:52:27.481127   21026 out.go:303] Setting JSON to false
	I1031 23:52:27.482345   21026 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2097,"bootTime":1698794251,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 23:52:27.482432   21026 start.go:138] virtualization: kvm guest
	I1031 23:52:27.484878   21026 out.go:177] * [functional-238689] minikube v1.32.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	I1031 23:52:27.486492   21026 out.go:177]   - MINIKUBE_LOCATION=17486
	I1031 23:52:27.486547   21026 notify.go:220] Checking for updates...
	I1031 23:52:27.488377   21026 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 23:52:27.490321   21026 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-7251/kubeconfig
	I1031 23:52:27.491833   21026 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7251/.minikube
	I1031 23:52:27.493269   21026 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 23:52:27.495032   21026 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 23:52:27.497261   21026 config.go:182] Loaded profile config "functional-238689": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 23:52:27.497860   21026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 23:52:27.497970   21026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:52:27.512317   21026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43379
	I1031 23:52:27.512703   21026 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:52:27.513335   21026 main.go:141] libmachine: Using API Version  1
	I1031 23:52:27.513361   21026 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:52:27.513733   21026 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:52:27.513968   21026 main.go:141] libmachine: (functional-238689) Calling .DriverName
	I1031 23:52:27.514216   21026 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 23:52:27.514561   21026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 23:52:27.514601   21026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 23:52:27.528380   21026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40507
	I1031 23:52:27.528802   21026 main.go:141] libmachine: () Calling .GetVersion
	I1031 23:52:27.529267   21026 main.go:141] libmachine: Using API Version  1
	I1031 23:52:27.529292   21026 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 23:52:27.529599   21026 main.go:141] libmachine: () Calling .GetMachineName
	I1031 23:52:27.529810   21026 main.go:141] libmachine: (functional-238689) Calling .DriverName
	I1031 23:52:27.565234   21026 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1031 23:52:27.566826   21026 start.go:298] selected driver: kvm2
	I1031 23:52:27.566846   21026 start.go:902] validating driver "kvm2" against &{Name:functional-238689 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17486/minikube-v1.32.0-1698773592-17486-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-238689 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.177 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 23:52:27.566936   21026 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 23:52:27.569309   21026 out.go:177] 
	W1031 23:52:27.570821   21026 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1031 23:52:27.572344   21026 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
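
The French output above is driven purely by the child process locale. A minimal sketch, assuming fr_FR.UTF-8 is installed on the host (the test presumably arranges the locale in a similar way):

package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-238689",
		"--dry-run", "--memory", "250MB", "--alsologtostderr", "--driver=kvm2")
	// Forcing the locale makes minikube pick its French translations.
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	_ = cmd.Run() // exit status 23 is expected, as in the English run
}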

TestFunctional/parallel/StatusCmd (1.15s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.15s)
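
The -f template above addresses fields named Host, Kubelet, APIServer and Kubeconfig; the same fields can be consumed from the -o json form. A minimal sketch (the struct mirrors those template keys; any extra fields in the real payload are simply ignored by encoding/json):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-238689", "status", "-o", "json").Output()
	if err != nil {
		log.Fatal(err) // status also exits non-zero when components are down
	}
	var st status
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}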

TestFunctional/parallel/ServiceCmdConnect (11.54s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-238689 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-238689 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-n7f2k" [7e92d5cd-95e0-4616-9d0b-6b8535dfa3d2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-n7f2k" [7e92d5cd-95e0-4616-9d0b-6b8535dfa3d2] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.020656959s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.177:32654
functional_test.go:1674: http://192.168.50.177:32654: success! body:

Hostname: hello-node-connect-55497b8b78-n7f2k

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.177:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.177:32654
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.54s)
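
The final check above amounts to an HTTP GET against the NodePort URL that minikube service --url printed. A minimal sketch (the endpoint is the one from this particular run and changes per deployment):

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	resp, err := http.Get("http://192.168.50.177:32654/")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("status=%d\n%s", resp.StatusCode, body) // echoserver reply as shown above
}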

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (57.38s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6b32500f-731f-42e0-af1a-6823f658e492] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.016863374s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-238689 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-238689 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-238689 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-238689 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-238689 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9cfb76c6-a3a5-4a18-ab17-b9387b7ba4b8] Pending
helpers_test.go:344: "sp-pod" [9cfb76c6-a3a5-4a18-ab17-b9387b7ba4b8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9cfb76c6-a3a5-4a18-ab17-b9387b7ba4b8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 19.030003266s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-238689 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-238689 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-238689 delete -f testdata/storage-provisioner/pod.yaml: (1.242687363s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-238689 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [486ab679-a790-40e3-ba6a-5e5eddb0e711] Pending
helpers_test.go:344: "sp-pod" [486ab679-a790-40e3-ba6a-5e5eddb0e711] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1031 23:52:41.737650   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [486ab679-a790-40e3-ba6a-5e5eddb0e711] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 29.017472733s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-238689 exec sp-pod -- ls /tmp/mount
2023/10/31 23:53:11 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (57.38s)
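
The persistence check above follows a write / delete / re-create / read cycle against the same claim. A condensed Go sketch shelling out to kubectl with the paths and context from the log (the real test also waits for pod readiness between steps):

package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	cmd := exec.Command("kubectl",
		append([]string{"--context", "functional-238689"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("%v: %v\n%s", args, err, out)
	}
}

func main() {
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write via the first pod
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// ...wait for the new sp-pod to be Running...
	run("exec", "sp-pod", "--", "ls", "/tmp/mount") // the file must still be there
}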

TestFunctional/parallel/SSHCmd (0.54s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.54s)

TestFunctional/parallel/CpCmd (1.02s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh -n functional-238689 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 cp functional-238689:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3132558382/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh -n functional-238689 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.02s)

TestFunctional/parallel/MySQL (39.44s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-238689 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-5w7jf" [37b564c8-4c55-40e0-904c-1ccb81fd819d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-5w7jf" [37b564c8-4c55-40e0-904c-1ccb81fd819d] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 27.022434408s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-238689 exec mysql-859648c796-5w7jf -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-238689 exec mysql-859648c796-5w7jf -- mysql -ppassword -e "show databases;": exit status 1 (187.018363ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-238689 exec mysql-859648c796-5w7jf -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-238689 exec mysql-859648c796-5w7jf -- mysql -ppassword -e "show databases;": exit status 1 (631.616535ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-238689 exec mysql-859648c796-5w7jf -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-238689 exec mysql-859648c796-5w7jf -- mysql -ppassword -e "show databases;": exit status 1 (496.334648ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-238689 exec mysql-859648c796-5w7jf -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-238689 exec mysql-859648c796-5w7jf -- mysql -ppassword -e "show databases;": exit status 1 (231.96928ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-238689 exec mysql-859648c796-5w7jf -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (39.44s)
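
The string of non-zero exits above is the expected warm-up: mysqld first refuses the socket (ERROR 2002), then rejects auth while initialising (ERROR 1045), and eventually answers. A minimal sketch of that retry loop, using the pod name from this run:

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(5 * time.Minute)
	for {
		cmd := exec.Command("kubectl", "--context", "functional-238689",
			"exec", "mysql-859648c796-5w7jf", "--",
			"mysql", "-ppassword", "-e", "show databases;")
		if out, err := cmd.CombinedOutput(); err == nil {
			log.Printf("mysql ready:\n%s", out)
			return
		} else if time.Now().After(deadline) {
			log.Fatalf("mysql never became ready: %v\n%s", err, out)
		}
		time.Sleep(5 * time.Second) // retry until mysqld finishes initialising
	}
}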

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/14463/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh "sudo cat /etc/test/nested/copy/14463/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.58s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/14463.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh "sudo cat /etc/ssl/certs/14463.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/14463.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh "sudo cat /usr/share/ca-certificates/14463.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/144632.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh "sudo cat /etc/ssl/certs/144632.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/144632.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh "sudo cat /usr/share/ca-certificates/144632.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.58s)
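
The numeric names checked above (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash file names, which is how the synced certificates become visible to TLS libraries scanning /etc/ssl/certs. A minimal sketch that predicts the installed name for a PEM certificate (the input path is a placeholder):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// openssl prints the subject hash that names the /etc/ssl/certs entry.
	out, err := exec.Command("openssl", "x509", "-noout", "-hash",
		"-in", "/path/to/cert.pem").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("expected file: /etc/ssl/certs/%s.0\n",
		strings.TrimSpace(string(out)))
}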

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-238689 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
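
The --template argument above is an ordinary Go text/template. A self-contained sketch of the same construct over a plain map (template range visits map keys in sorted order, so label output is deterministic); the sample labels are illustrative only:

package main

import (
	"os"
	"text/template"
)

func main() {
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-238689",
		"kubernetes.io/os":       "linux",
	}
	tmpl := template.Must(template.New("labels").Parse(
		"{{range $k, $v := .}}{{$k}} {{end}}"))
	if err := tmpl.Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
}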

TestFunctional/parallel/NonActiveRuntimeDisabled (0.26s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-238689 ssh "sudo systemctl is-active crio": exit status 1 (257.963815ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.26s)
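
systemctl is-active reports state twice: as text on stdout ("inactive") and via the exit code (0 for active, 3 here for inactive), which is why a non-zero exit with stdout "inactive" counts as a pass above. A minimal local sketch:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("systemctl", "is-active", "crio")
	out, err := cmd.Output() // a non-zero exit is expected when the unit is inactive
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	}
	fmt.Printf("state=%s exit=%d\n", strings.TrimSpace(string(out)), code)
}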

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-238689 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-238689
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-238689
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-238689 image ls --format short --alsologtostderr:
I1031 23:52:58.964499   21946 out.go:296] Setting OutFile to fd 1 ...
I1031 23:52:58.964668   21946 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 23:52:58.964681   21946 out.go:309] Setting ErrFile to fd 2...
I1031 23:52:58.964687   21946 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 23:52:58.964891   21946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7251/.minikube/bin
I1031 23:52:58.965492   21946 config.go:182] Loaded profile config "functional-238689": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1031 23:52:58.965588   21946 config.go:182] Loaded profile config "functional-238689": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1031 23:52:58.965956   21946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1031 23:52:58.966003   21946 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 23:52:58.980747   21946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46819
I1031 23:52:58.981217   21946 main.go:141] libmachine: () Calling .GetVersion
I1031 23:52:58.981874   21946 main.go:141] libmachine: Using API Version  1
I1031 23:52:58.981907   21946 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 23:52:58.982254   21946 main.go:141] libmachine: () Calling .GetMachineName
I1031 23:52:58.982468   21946 main.go:141] libmachine: (functional-238689) Calling .GetState
I1031 23:52:58.984270   21946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1031 23:52:58.984306   21946 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 23:52:58.998675   21946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42391
I1031 23:52:58.999179   21946 main.go:141] libmachine: () Calling .GetVersion
I1031 23:52:58.999802   21946 main.go:141] libmachine: Using API Version  1
I1031 23:52:58.999841   21946 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 23:52:59.000162   21946 main.go:141] libmachine: () Calling .GetMachineName
I1031 23:52:59.000353   21946 main.go:141] libmachine: (functional-238689) Calling .DriverName
I1031 23:52:59.000562   21946 ssh_runner.go:195] Run: systemctl --version
I1031 23:52:59.000587   21946 main.go:141] libmachine: (functional-238689) Calling .GetSSHHostname
I1031 23:52:59.003441   21946 main.go:141] libmachine: (functional-238689) DBG | domain functional-238689 has defined MAC address 52:54:00:c7:c1:d7 in network mk-functional-238689
I1031 23:52:59.003883   21946 main.go:141] libmachine: (functional-238689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c1:d7", ip: ""} in network mk-functional-238689: {Iface:virbr1 ExpiryTime:2023-11-01 00:49:11 +0000 UTC Type:0 Mac:52:54:00:c7:c1:d7 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:functional-238689 Clientid:01:52:54:00:c7:c1:d7}
I1031 23:52:59.003921   21946 main.go:141] libmachine: (functional-238689) DBG | domain functional-238689 has defined IP address 192.168.50.177 and MAC address 52:54:00:c7:c1:d7 in network mk-functional-238689
I1031 23:52:59.004067   21946 main.go:141] libmachine: (functional-238689) Calling .GetSSHPort
I1031 23:52:59.004249   21946 main.go:141] libmachine: (functional-238689) Calling .GetSSHKeyPath
I1031 23:52:59.004478   21946 main.go:141] libmachine: (functional-238689) Calling .GetSSHUsername
I1031 23:52:59.004671   21946 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/functional-238689/id_rsa Username:docker}
I1031 23:52:59.133990   21946 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1031 23:52:59.179569   21946 main.go:141] libmachine: Making call to close driver server
I1031 23:52:59.179585   21946 main.go:141] libmachine: (functional-238689) Calling .Close
I1031 23:52:59.179908   21946 main.go:141] libmachine: Successfully made call to close driver server
I1031 23:52:59.179949   21946 main.go:141] libmachine: Making call to close connection to plugin binary
I1031 23:52:59.179911   21946 main.go:141] libmachine: (functional-238689) DBG | Closing plugin on server side
I1031 23:52:59.179961   21946 main.go:141] libmachine: Making call to close driver server
I1031 23:52:59.180023   21946 main.go:141] libmachine: (functional-238689) Calling .Close
I1031 23:52:59.180285   21946 main.go:141] libmachine: (functional-238689) DBG | Closing plugin on server side
I1031 23:52:59.180324   21946 main.go:141] libmachine: Successfully made call to close driver server
I1031 23:52:59.180340   21946 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-238689 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-238689 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/mysql                     | 5.7               | 547b3c3c15a96 | 501MB  |
| registry.k8s.io/kube-scheduler              | v1.28.3           | 6d1b4fd1b182d | 60.1MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest            | 593aee2afb642 | 187MB  |
| registry.k8s.io/kube-apiserver              | v1.28.3           | 5374347291230 | 126MB  |
| registry.k8s.io/kube-proxy                  | v1.28.3           | bfc896cf80fba | 73.1MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/localhost/my-image                | functional-238689 | 06447fbead36b | 1.24MB |
| docker.io/library/minikube-local-cache-test | functional-238689 | cdd1e7fcce7db | 30B    |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.3           | 10baa1ca17068 | 122MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-238689 image ls --format table --alsologtostderr:
I1031 23:53:02.868821   22114 out.go:296] Setting OutFile to fd 1 ...
I1031 23:53:02.868929   22114 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 23:53:02.868933   22114 out.go:309] Setting ErrFile to fd 2...
I1031 23:53:02.868937   22114 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 23:53:02.869123   22114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7251/.minikube/bin
I1031 23:53:02.869679   22114 config.go:182] Loaded profile config "functional-238689": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1031 23:53:02.869781   22114 config.go:182] Loaded profile config "functional-238689": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1031 23:53:02.870147   22114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1031 23:53:02.870195   22114 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 23:53:02.884533   22114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32771
I1031 23:53:02.885017   22114 main.go:141] libmachine: () Calling .GetVersion
I1031 23:53:02.885590   22114 main.go:141] libmachine: Using API Version  1
I1031 23:53:02.885617   22114 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 23:53:02.885943   22114 main.go:141] libmachine: () Calling .GetMachineName
I1031 23:53:02.886122   22114 main.go:141] libmachine: (functional-238689) Calling .GetState
I1031 23:53:02.887983   22114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1031 23:53:02.888035   22114 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 23:53:02.902541   22114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42279
I1031 23:53:02.902967   22114 main.go:141] libmachine: () Calling .GetVersion
I1031 23:53:02.903500   22114 main.go:141] libmachine: Using API Version  1
I1031 23:53:02.903542   22114 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 23:53:02.903879   22114 main.go:141] libmachine: () Calling .GetMachineName
I1031 23:53:02.904077   22114 main.go:141] libmachine: (functional-238689) Calling .DriverName
I1031 23:53:02.904318   22114 ssh_runner.go:195] Run: systemctl --version
I1031 23:53:02.904352   22114 main.go:141] libmachine: (functional-238689) Calling .GetSSHHostname
I1031 23:53:02.906915   22114 main.go:141] libmachine: (functional-238689) DBG | domain functional-238689 has defined MAC address 52:54:00:c7:c1:d7 in network mk-functional-238689
I1031 23:53:02.907268   22114 main.go:141] libmachine: (functional-238689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c1:d7", ip: ""} in network mk-functional-238689: {Iface:virbr1 ExpiryTime:2023-11-01 00:49:11 +0000 UTC Type:0 Mac:52:54:00:c7:c1:d7 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:functional-238689 Clientid:01:52:54:00:c7:c1:d7}
I1031 23:53:02.907303   22114 main.go:141] libmachine: (functional-238689) DBG | domain functional-238689 has defined IP address 192.168.50.177 and MAC address 52:54:00:c7:c1:d7 in network mk-functional-238689
I1031 23:53:02.907445   22114 main.go:141] libmachine: (functional-238689) Calling .GetSSHPort
I1031 23:53:02.907597   22114 main.go:141] libmachine: (functional-238689) Calling .GetSSHKeyPath
I1031 23:53:02.907718   22114 main.go:141] libmachine: (functional-238689) Calling .GetSSHUsername
I1031 23:53:02.907808   22114 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/functional-238689/id_rsa Username:docker}
I1031 23:53:02.996740   22114 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1031 23:53:03.022312   22114 main.go:141] libmachine: Making call to close driver server
I1031 23:53:03.022328   22114 main.go:141] libmachine: (functional-238689) Calling .Close
I1031 23:53:03.022568   22114 main.go:141] libmachine: Successfully made call to close driver server
I1031 23:53:03.022588   22114 main.go:141] libmachine: Making call to close connection to plugin binary
I1031 23:53:03.022603   22114 main.go:141] libmachine: Making call to close driver server
I1031 23:53:03.022613   22114 main.go:141] libmachine: (functional-238689) Calling .Close
I1031 23:53:03.022872   22114 main.go:141] libmachine: Successfully made call to close driver server
I1031 23:53:03.022891   22114 main.go:141] libmachine: Making call to close connection to plugin binary
I1031 23:53:03.022923   22114 main.go:141] libmachine: (functional-238689) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-238689 image ls --format json --alsologtostderr:
[{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"73100000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b51
6b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-238689"],"size":"32900000"},{"id":"593aee2afb642798b83a85306d2625fd7f089c0a1242c7e75a237846d80aa2a0","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"547b3c3c15a9698ee368530b251e6baa66807c64742355e6724ba59b4d3ec8a6","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"122000000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8
s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"06447fbead36b9f3a515e4103ec20e3f8fbc58496cff62318ecebf244c557431","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-238689"],"size":"1240000"},{"id":"cdd1e7fcce7db84f0efc08f4bac93f4567a4ad73c791b4aecab3fc8983a86176","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-238689"],"size":"30"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"
],"size":"43800000"},{"id":"53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"126000000"},{"id":"6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"60100000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-238689 image ls --format json --alsologtostderr:
I1031 23:53:02.655781   22091 out.go:296] Setting OutFile to fd 1 ...
I1031 23:53:02.655932   22091 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 23:53:02.655943   22091 out.go:309] Setting ErrFile to fd 2...
I1031 23:53:02.655953   22091 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 23:53:02.656192   22091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7251/.minikube/bin
I1031 23:53:02.656804   22091 config.go:182] Loaded profile config "functional-238689": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1031 23:53:02.656924   22091 config.go:182] Loaded profile config "functional-238689": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1031 23:53:02.657327   22091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1031 23:53:02.657388   22091 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 23:53:02.671964   22091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33633
I1031 23:53:02.672427   22091 main.go:141] libmachine: () Calling .GetVersion
I1031 23:53:02.673046   22091 main.go:141] libmachine: Using API Version  1
I1031 23:53:02.673069   22091 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 23:53:02.673417   22091 main.go:141] libmachine: () Calling .GetMachineName
I1031 23:53:02.673580   22091 main.go:141] libmachine: (functional-238689) Calling .GetState
I1031 23:53:02.675292   22091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1031 23:53:02.675329   22091 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 23:53:02.690114   22091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
I1031 23:53:02.690621   22091 main.go:141] libmachine: () Calling .GetVersion
I1031 23:53:02.691074   22091 main.go:141] libmachine: Using API Version  1
I1031 23:53:02.691092   22091 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 23:53:02.691433   22091 main.go:141] libmachine: () Calling .GetMachineName
I1031 23:53:02.691641   22091 main.go:141] libmachine: (functional-238689) Calling .DriverName
I1031 23:53:02.691848   22091 ssh_runner.go:195] Run: systemctl --version
I1031 23:53:02.691878   22091 main.go:141] libmachine: (functional-238689) Calling .GetSSHHostname
I1031 23:53:02.694570   22091 main.go:141] libmachine: (functional-238689) DBG | domain functional-238689 has defined MAC address 52:54:00:c7:c1:d7 in network mk-functional-238689
I1031 23:53:02.694930   22091 main.go:141] libmachine: (functional-238689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c1:d7", ip: ""} in network mk-functional-238689: {Iface:virbr1 ExpiryTime:2023-11-01 00:49:11 +0000 UTC Type:0 Mac:52:54:00:c7:c1:d7 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:functional-238689 Clientid:01:52:54:00:c7:c1:d7}
I1031 23:53:02.694960   22091 main.go:141] libmachine: (functional-238689) DBG | domain functional-238689 has defined IP address 192.168.50.177 and MAC address 52:54:00:c7:c1:d7 in network mk-functional-238689
I1031 23:53:02.695125   22091 main.go:141] libmachine: (functional-238689) Calling .GetSSHPort
I1031 23:53:02.695309   22091 main.go:141] libmachine: (functional-238689) Calling .GetSSHKeyPath
I1031 23:53:02.695529   22091 main.go:141] libmachine: (functional-238689) Calling .GetSSHUsername
I1031 23:53:02.695667   22091 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/functional-238689/id_rsa Username:docker}
I1031 23:53:02.780758   22091 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1031 23:53:02.805265   22091 main.go:141] libmachine: Making call to close driver server
I1031 23:53:02.805286   22091 main.go:141] libmachine: (functional-238689) Calling .Close
I1031 23:53:02.805570   22091 main.go:141] libmachine: Successfully made call to close driver server
I1031 23:53:02.805592   22091 main.go:141] libmachine: (functional-238689) DBG | Closing plugin on server side
I1031 23:53:02.805597   22091 main.go:141] libmachine: Making call to close connection to plugin binary
I1031 23:53:02.805610   22091 main.go:141] libmachine: Making call to close driver server
I1031 23:53:02.805619   22091 main.go:141] libmachine: (functional-238689) Calling .Close
I1031 23:53:02.805841   22091 main.go:141] libmachine: (functional-238689) DBG | Closing plugin on server side
I1031 23:53:02.805955   22091 main.go:141] libmachine: Successfully made call to close driver server
I1031 23:53:02.805994   22091 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-238689 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 547b3c3c15a9698ee368530b251e6baa66807c64742355e6724ba59b4d3ec8a6
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-238689
size: "32900000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: cdd1e7fcce7db84f0efc08f4bac93f4567a4ad73c791b4aecab3fc8983a86176
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-238689
size: "30"
- id: 53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "126000000"
- id: 6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "60100000"
- id: 10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "122000000"
- id: bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "73100000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 593aee2afb642798b83a85306d2625fd7f089c0a1242c7e75a237846d80aa2a0
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-238689 image ls --format yaml --alsologtostderr:
I1031 23:52:59.244593   21969 out.go:296] Setting OutFile to fd 1 ...
I1031 23:52:59.244694   21969 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 23:52:59.244703   21969 out.go:309] Setting ErrFile to fd 2...
I1031 23:52:59.244708   21969 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 23:52:59.244897   21969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7251/.minikube/bin
I1031 23:52:59.245430   21969 config.go:182] Loaded profile config "functional-238689": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1031 23:52:59.245530   21969 config.go:182] Loaded profile config "functional-238689": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1031 23:52:59.245920   21969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1031 23:52:59.245964   21969 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 23:52:59.260853   21969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42911
I1031 23:52:59.261331   21969 main.go:141] libmachine: () Calling .GetVersion
I1031 23:52:59.261920   21969 main.go:141] libmachine: Using API Version  1
I1031 23:52:59.261944   21969 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 23:52:59.262274   21969 main.go:141] libmachine: () Calling .GetMachineName
I1031 23:52:59.262445   21969 main.go:141] libmachine: (functional-238689) Calling .GetState
I1031 23:52:59.264267   21969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1031 23:52:59.264315   21969 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 23:52:59.278549   21969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40735
I1031 23:52:59.279005   21969 main.go:141] libmachine: () Calling .GetVersion
I1031 23:52:59.279504   21969 main.go:141] libmachine: Using API Version  1
I1031 23:52:59.279525   21969 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 23:52:59.279824   21969 main.go:141] libmachine: () Calling .GetMachineName
I1031 23:52:59.279988   21969 main.go:141] libmachine: (functional-238689) Calling .DriverName
I1031 23:52:59.280199   21969 ssh_runner.go:195] Run: systemctl --version
I1031 23:52:59.280225   21969 main.go:141] libmachine: (functional-238689) Calling .GetSSHHostname
I1031 23:52:59.282909   21969 main.go:141] libmachine: (functional-238689) DBG | domain functional-238689 has defined MAC address 52:54:00:c7:c1:d7 in network mk-functional-238689
I1031 23:52:59.283365   21969 main.go:141] libmachine: (functional-238689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c1:d7", ip: ""} in network mk-functional-238689: {Iface:virbr1 ExpiryTime:2023-11-01 00:49:11 +0000 UTC Type:0 Mac:52:54:00:c7:c1:d7 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:functional-238689 Clientid:01:52:54:00:c7:c1:d7}
I1031 23:52:59.283394   21969 main.go:141] libmachine: (functional-238689) DBG | domain functional-238689 has defined IP address 192.168.50.177 and MAC address 52:54:00:c7:c1:d7 in network mk-functional-238689
I1031 23:52:59.283556   21969 main.go:141] libmachine: (functional-238689) Calling .GetSSHPort
I1031 23:52:59.283748   21969 main.go:141] libmachine: (functional-238689) Calling .GetSSHKeyPath
I1031 23:52:59.283899   21969 main.go:141] libmachine: (functional-238689) Calling .GetSSHUsername
I1031 23:52:59.284016   21969 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/functional-238689/id_rsa Username:docker}
I1031 23:52:59.426480   21969 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1031 23:52:59.458434   21969 main.go:141] libmachine: Making call to close driver server
I1031 23:52:59.458450   21969 main.go:141] libmachine: (functional-238689) Calling .Close
I1031 23:52:59.458835   21969 main.go:141] libmachine: Successfully made call to close driver server
I1031 23:52:59.458859   21969 main.go:141] libmachine: Making call to close connection to plugin binary
I1031 23:52:59.458873   21969 main.go:141] libmachine: Making call to close driver server
I1031 23:52:59.458865   21969 main.go:141] libmachine: (functional-238689) DBG | Closing plugin on server side
I1031 23:52:59.458884   21969 main.go:141] libmachine: (functional-238689) Calling .Close
I1031 23:52:59.459113   21969 main.go:141] libmachine: Successfully made call to close driver server
I1031 23:52:59.459119   21969 main.go:141] libmachine: (functional-238689) DBG | Closing plugin on server side
I1031 23:52:59.459133   21969 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-238689 ssh pgrep buildkitd: exit status 1 (223.640134ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 image build -t localhost/my-image:functional-238689 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-238689 image build -t localhost/my-image:functional-238689 testdata/build --alsologtostderr: (2.66937629s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-238689 image build -t localhost/my-image:functional-238689 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 35f5a27bf4a0
Removing intermediate container 35f5a27bf4a0
---> 52ed5ce18055
Step 3/3 : ADD content.txt /
---> 06447fbead36
Successfully built 06447fbead36
Successfully tagged localhost/my-image:functional-238689
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-238689 image build -t localhost/my-image:functional-238689 testdata/build --alsologtostderr:
I1031 23:52:59.744786   22022 out.go:296] Setting OutFile to fd 1 ...
I1031 23:52:59.744944   22022 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 23:52:59.744955   22022 out.go:309] Setting ErrFile to fd 2...
I1031 23:52:59.744962   22022 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 23:52:59.745152   22022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7251/.minikube/bin
I1031 23:52:59.745735   22022 config.go:182] Loaded profile config "functional-238689": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1031 23:52:59.746339   22022 config.go:182] Loaded profile config "functional-238689": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1031 23:52:59.746768   22022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1031 23:52:59.746841   22022 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 23:52:59.761769   22022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34683
I1031 23:52:59.762152   22022 main.go:141] libmachine: () Calling .GetVersion
I1031 23:52:59.762762   22022 main.go:141] libmachine: Using API Version  1
I1031 23:52:59.762793   22022 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 23:52:59.763116   22022 main.go:141] libmachine: () Calling .GetMachineName
I1031 23:52:59.763307   22022 main.go:141] libmachine: (functional-238689) Calling .GetState
I1031 23:52:59.765523   22022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1031 23:52:59.765563   22022 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 23:52:59.780122   22022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46227
I1031 23:52:59.780599   22022 main.go:141] libmachine: () Calling .GetVersion
I1031 23:52:59.781189   22022 main.go:141] libmachine: Using API Version  1
I1031 23:52:59.781216   22022 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 23:52:59.781510   22022 main.go:141] libmachine: () Calling .GetMachineName
I1031 23:52:59.781678   22022 main.go:141] libmachine: (functional-238689) Calling .DriverName
I1031 23:52:59.781883   22022 ssh_runner.go:195] Run: systemctl --version
I1031 23:52:59.781911   22022 main.go:141] libmachine: (functional-238689) Calling .GetSSHHostname
I1031 23:52:59.784824   22022 main.go:141] libmachine: (functional-238689) DBG | domain functional-238689 has defined MAC address 52:54:00:c7:c1:d7 in network mk-functional-238689
I1031 23:52:59.785287   22022 main.go:141] libmachine: (functional-238689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:c1:d7", ip: ""} in network mk-functional-238689: {Iface:virbr1 ExpiryTime:2023-11-01 00:49:11 +0000 UTC Type:0 Mac:52:54:00:c7:c1:d7 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:functional-238689 Clientid:01:52:54:00:c7:c1:d7}
I1031 23:52:59.785324   22022 main.go:141] libmachine: (functional-238689) DBG | domain functional-238689 has defined IP address 192.168.50.177 and MAC address 52:54:00:c7:c1:d7 in network mk-functional-238689
I1031 23:52:59.785435   22022 main.go:141] libmachine: (functional-238689) Calling .GetSSHPort
I1031 23:52:59.785613   22022 main.go:141] libmachine: (functional-238689) Calling .GetSSHKeyPath
I1031 23:52:59.785742   22022 main.go:141] libmachine: (functional-238689) Calling .GetSSHUsername
I1031 23:52:59.785926   22022 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/functional-238689/id_rsa Username:docker}
I1031 23:52:59.872847   22022 build_images.go:151] Building image from path: /tmp/build.4223868555.tar
I1031 23:52:59.872920   22022 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1031 23:52:59.884075   22022 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4223868555.tar
I1031 23:52:59.889516   22022 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4223868555.tar: stat -c "%s %y" /var/lib/minikube/build/build.4223868555.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4223868555.tar': No such file or directory
I1031 23:52:59.889557   22022 ssh_runner.go:362] scp /tmp/build.4223868555.tar --> /var/lib/minikube/build/build.4223868555.tar (3072 bytes)
I1031 23:52:59.913284   22022 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4223868555
I1031 23:52:59.922804   22022 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4223868555 -xf /var/lib/minikube/build/build.4223868555.tar
I1031 23:52:59.931703   22022 docker.go:347] Building image: /var/lib/minikube/build/build.4223868555
I1031 23:52:59.931769   22022 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-238689 /var/lib/minikube/build/build.4223868555
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I1031 23:53:02.334341   22022 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-238689 /var/lib/minikube/build/build.4223868555: (2.402547596s)
I1031 23:53:02.334392   22022 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4223868555
I1031 23:53:02.343084   22022 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4223868555.tar
I1031 23:53:02.352360   22022 build_images.go:207] Built localhost/my-image:functional-238689 from /tmp/build.4223868555.tar
I1031 23:53:02.352384   22022 build_images.go:123] succeeded building to: functional-238689
I1031 23:53:02.352388   22022 build_images.go:124] failed building to: 
I1031 23:53:02.352404   22022 main.go:141] libmachine: Making call to close driver server
I1031 23:53:02.352421   22022 main.go:141] libmachine: (functional-238689) Calling .Close
I1031 23:53:02.352740   22022 main.go:141] libmachine: (functional-238689) DBG | Closing plugin on server side
I1031 23:53:02.352746   22022 main.go:141] libmachine: Successfully made call to close driver server
I1031 23:53:02.352763   22022 main.go:141] libmachine: Making call to close connection to plugin binary
I1031 23:53:02.352773   22022 main.go:141] libmachine: Making call to close driver server
I1031 23:53:02.352782   22022 main.go:141] libmachine: (functional-238689) Calling .Close
I1031 23:53:02.352993   22022 main.go:141] libmachine: (functional-238689) DBG | Closing plugin on server side
I1031 23:53:02.353038   22022 main.go:141] libmachine: Successfully made call to close driver server
I1031 23:53:02.353077   22022 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.13s)
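(Note: the three-step build echoed in the Stdout above can be reproduced by hand against the same profile. This is a sketch inferred from the log only; the actual contents of testdata/build are not shown in this report, and the content.txt payload below is a placeholder.)

# Recreate a build context matching the steps the daemon echoed back:
# FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /
mkdir -p /tmp/build-context
echo hello > /tmp/build-context/content.txt
cat > /tmp/build-context/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
# Build inside the cluster's Docker daemon, as the test does:
out/minikube-linux-amd64 -p functional-238689 image build -t localhost/my-image:functional-238689 /tmp/build-context --alsologtostderr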
TestFunctional/parallel/ImageCommands/Setup (1.37s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.346111234s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-238689
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.37s)

TestFunctional/parallel/DockerEnv/bash (1.16s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-238689 docker-env) && out/minikube-linux-amd64 status -p functional-238689"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-238689 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.16s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-238689 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-238689 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-4l6mb" [3106d360-18f8-4202-aa56-4dcd1e01f866] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-4l6mb" [3106d360-18f8-4202-aa56-4dcd1e01f866] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.022134954s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.25s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 image load --daemon gcr.io/google-containers/addon-resizer:functional-238689 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-238689 image load --daemon gcr.io/google-containers/addon-resizer:functional-238689 --alsologtostderr: (4.108848115s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.35s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 image load --daemon gcr.io/google-containers/addon-resizer:functional-238689 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-238689 image load --daemon gcr.io/google-containers/addon-resizer:functional-238689 --alsologtostderr: (2.310482989s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.52s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.169596273s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-238689
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 image load --daemon gcr.io/google-containers/addon-resizer:functional-238689 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-238689 image load --daemon gcr.io/google-containers/addon-resizer:functional-238689 --alsologtostderr: (3.637194539s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.10s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 image save gcr.io/google-containers/addon-resizer:functional-238689 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-238689 image save gcr.io/google-containers/addon-resizer:functional-238689 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (2.289207512s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.29s)

TestFunctional/parallel/Version/short (0.15s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 version --short
--- PASS: TestFunctional/parallel/Version/short (0.15s)

TestFunctional/parallel/Version/components (1.1s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-amd64 -p functional-238689 version -o=json --components: (1.098658072s)
--- PASS: TestFunctional/parallel/Version/components (1.10s)

TestFunctional/parallel/ServiceCmd/List (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 service list -o json
functional_test.go:1493: Took "388.837382ms" to run "out/minikube-linux-amd64 -p functional-238689 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.39s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.177:32581
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ServiceCmd/Format (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 image rm gcr.io/google-containers/addon-resizer:functional-238689 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "315.775814ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "61.540479ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.177:32581
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "340.670319ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "65.910141ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/MountCmd/any-port (27.21s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-238689 /tmp/TestFunctionalparallelMountCmdany-port1422984554/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1698796346601093920" to /tmp/TestFunctionalparallelMountCmdany-port1422984554/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1698796346601093920" to /tmp/TestFunctionalparallelMountCmdany-port1422984554/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1698796346601093920" to /tmp/TestFunctionalparallelMountCmdany-port1422984554/001/test-1698796346601093920
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-238689 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (273.312821ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 31 23:52 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 31 23:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 31 23:52 test-1698796346601093920
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh cat /mount-9p/test-1698796346601093920
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-238689 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [352e38bb-24b2-46b9-94d5-4448b44a94d9] Pending
helpers_test.go:344: "busybox-mount" [352e38bb-24b2-46b9-94d5-4448b44a94d9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [352e38bb-24b2-46b9-94d5-4448b44a94d9] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [352e38bb-24b2-46b9-94d5-4448b44a94d9] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 24.30812272s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-238689 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-238689 /tmp/TestFunctionalparallelMountCmdany-port1422984554/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (27.21s)
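(Note: condensed from the run above, the 9p mount check amounts to the commands below; all flags are taken from the log, while /tmp/mnt-demo stands in for whatever host directory is being exported.)

out/minikube-linux-amd64 mount -p functional-238689 /tmp/mnt-demo:/mount-9p --alsologtostderr -v=1 &
out/minikube-linux-amd64 -p functional-238689 ssh "findmnt -T /mount-9p | grep 9p"   # verify the 9p mount is live in the guest
out/minikube-linux-amd64 -p functional-238689 ssh "sudo umount -f /mount-9p"         # clean up, as the test teardown does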
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-238689 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (2.124064006s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.38s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-238689
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 image save --daemon gcr.io/google-containers/addon-resizer:functional-238689 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-238689 image save --daemon gcr.io/google-containers/addon-resizer:functional-238689 --alsologtostderr: (1.784935633s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-238689
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.82s)

TestFunctional/parallel/MountCmd/specific-port (1.74s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-238689 /tmp/TestFunctionalparallelMountCmdspecific-port624345505/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-238689 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (248.690008ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-238689 /tmp/TestFunctionalparallelMountCmdspecific-port624345505/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-238689 ssh "sudo umount -f /mount-9p": exit status 1 (228.082058ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-238689 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-238689 /tmp/TestFunctionalparallelMountCmdspecific-port624345505/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.74s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-238689 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2348827945/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-238689 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2348827945/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-238689 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2348827945/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-238689 ssh "findmnt -T" /mount1: exit status 1 (289.767345ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-238689 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-238689 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-238689 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2348827945/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-238689 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2348827945/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-238689 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2348827945/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-238689
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-238689
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-238689
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestGvisorAddon (368.1s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-273376 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-273376 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (2m4.04680151s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-273376 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-273376 cache add gcr.io/k8s-minikube/gvisor-addon:2: (25.213998459s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-273376 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-273376 addons enable gvisor: (3.878987537s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [4e49d57d-6c1d-4710-a081-b32a380b59a4] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.025787324s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-273376 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [c811b164-2c53-4ad9-8ce1-81db57569524] Pending
helpers_test.go:344: "nginx-gvisor" [c811b164-2c53-4ad9-8ce1-81db57569524] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [c811b164-2c53-4ad9-8ce1-81db57569524] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 44.020940403s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-273376
E1101 00:23:40.247735   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/skaffold-440701/client.crt: no such file or directory
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-273376: (1m31.863473408s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-273376 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-273376 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m2.31782214s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [4e49d57d-6c1d-4710-a081-b32a380b59a4] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
helpers_test.go:344: "gvisor" [4e49d57d-6c1d-4710-a081-b32a380b59a4] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.033616484s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [c811b164-2c53-4ad9-8ce1-81db57569524] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1101 00:26:19.813730   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.013492554s
helpers_test.go:175: Cleaning up "gvisor-273376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-273376
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-273376: (1.203219002s)
--- PASS: TestGvisorAddon (368.10s)
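(Note: the gvisor exercise above condenses to the commands below, taken verbatim from the run; testdata/nginx-gvisor.yaml ships with the minikube test suite and is not reproduced in this report.)

out/minikube-linux-amd64 start -p gvisor-273376 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2
out/minikube-linux-amd64 -p gvisor-273376 cache add gcr.io/k8s-minikube/gvisor-addon:2   # pre-cache the addon image
out/minikube-linux-amd64 -p gvisor-273376 addons enable gvisor
kubectl --context gvisor-273376 replace --force -f testdata/nginx-gvisor.yaml            # nginx pod labeled run=nginx,runtime=gvisor
out/minikube-linux-amd64 delete -p gvisor-273376                                         # cleanup, as the test does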
TestImageBuild/serial/Setup (51.3s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-439358 --driver=kvm2 
E1031 23:54:03.658437   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-439358 --driver=kvm2 : (51.297744446s)
--- PASS: TestImageBuild/serial/Setup (51.30s)

TestImageBuild/serial/NormalBuild (1.67s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-439358
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-439358: (1.66562957s)
--- PASS: TestImageBuild/serial/NormalBuild (1.67s)

TestImageBuild/serial/BuildWithBuildArg (1.33s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-439358
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-439358: (1.329795315s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.33s)

TestImageBuild/serial/BuildWithDockerIgnore (0.39s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-439358
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.39s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.3s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-439358
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.30s)

TestIngressAddonLegacy/StartLegacyK8sCluster (78.98s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-779845 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-779845 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (1m18.976625405s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (78.98s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.45s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-779845 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-779845 addons enable ingress --alsologtostderr -v=5: (14.448359032s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.45s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-779845 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (39.42s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-779845 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-779845 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.228630408s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-779845 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-779845 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a5bfe175-b1cf-4b48-9653-ad5cb0c781db] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a5bfe175-b1cf-4b48-9653-ad5cb0c781db] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.016898181s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-779845 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-779845 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-779845 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.50.84
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-779845 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-779845 addons disable ingress-dns --alsologtostderr -v=1: (4.536301716s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-779845 addons disable ingress --alsologtostderr -v=1
E1031 23:56:19.814346   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-779845 addons disable ingress --alsologtostderr -v=1: (7.497749082s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (39.42s)
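
Note: the curl at addons_test.go:261 runs inside the VM, where the legacy ingress controller listens on port 80, and 192.168.50.84 is the VM address reported by the ip command above. The same checks can be reproduced by hand (sketch; names and paths as in the log):

    # Inside the guest: the Host header selects the nginx.example.com rule.
    out/minikube-linux-amd64 -p ingress-addon-legacy-779845 ssh \
      "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # From the host: pin the name to the VM IP instead of editing /etc/hosts.
    curl -s --resolve nginx.example.com:80:192.168.50.84 http://nginx.example.com/
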

TestJSONOutput/start/Command (102.21s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-473425 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E1031 23:56:47.498808   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
E1031 23:57:11.647597   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/functional-238689/client.crt: no such file or directory
E1031 23:57:11.652940   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/functional-238689/client.crt: no such file or directory
E1031 23:57:11.663269   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/functional-238689/client.crt: no such file or directory
E1031 23:57:11.683706   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/functional-238689/client.crt: no such file or directory
E1031 23:57:11.724083   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/functional-238689/client.crt: no such file or directory
E1031 23:57:11.804449   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/functional-238689/client.crt: no such file or directory
E1031 23:57:11.964919   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/functional-238689/client.crt: no such file or directory
E1031 23:57:12.285607   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/functional-238689/client.crt: no such file or directory
E1031 23:57:12.926575   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/functional-238689/client.crt: no such file or directory
E1031 23:57:14.207414   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/functional-238689/client.crt: no such file or directory
E1031 23:57:16.768289   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/functional-238689/client.crt: no such file or directory
E1031 23:57:21.889000   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/functional-238689/client.crt: no such file or directory
E1031 23:57:32.129928   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/functional-238689/client.crt: no such file or directory
E1031 23:57:52.610224   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/functional-238689/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-473425 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m42.213419794s)
--- PASS: TestJSONOutput/start/Command (102.21s)
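
Note: with --output=json, minikube emits one CloudEvents-style JSON object per line on stdout (the raw shape is visible verbatim in TestErrorJSONOutput further down), and the Audit and parallel subtests below assert over that stream. A sketch of consuming it; the jq filter is illustrative:

    # Step events carry data.currentstep, which DistinctCurrentSteps and
    # IncreasingCurrentSteps check for uniqueness and monotonic increase.
    out/minikube-linux-amd64 start -p json-output-473425 --output=json --user=testUser \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep + " " + .data.name'
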

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.57s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-473425 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.57s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.52s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-473425 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.52s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.1s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-473425 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-473425 --output=json --user=testUser: (8.103744623s)
--- PASS: TestJSONOutput/stop/Command (8.10s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-826138 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-826138 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (80.664326ms)

-- stdout --
	{"specversion":"1.0","id":"418888da-9c65-4de6-be6d-75a7bbeb67c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-826138] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5c4d4cd3-46ff-4cc1-9c61-4635a965cac6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17486"}}
	{"specversion":"1.0","id":"e1224940-e43c-4474-aac4-9a9462a45389","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"76f63943-069d-4f9a-ac7d-f4dd82e253b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17486-7251/kubeconfig"}}
	{"specversion":"1.0","id":"7a312e51-cef7-4ce8-8da4-2d5293c1d27c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7251/.minikube"}}
	{"specversion":"1.0","id":"89bf84ec-e1cd-4daa-9bae-acc64e2f497b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f0b2f132-4c77-4f18-827e-f338ed18e3c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"61f57937-b19a-4c69-a4d6-f3db3fdc500c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-826138" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-826138
--- PASS: TestErrorJSONOutput (0.22s)
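
Note: on the failure path the stream ends with an event of type io.k8s.sigs.minikube.error whose payload duplicates the process exit code (56 and DRV_UNSUPPORTED_OS in the stdout above). Pulling those fields out by hand (illustrative; the start command itself exits non-zero):

    out/minikube-linux-amd64 start -p json-output-error-826138 --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + " (exit " + .data.exitcode + ")"'
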

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (104.69s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-291556 --driver=kvm2 
E1031 23:58:33.570437   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/functional-238689/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-291556 --driver=kvm2 : (52.876714392s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-293408 --driver=kvm2 
E1031 23:59:55.490662   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/functional-238689/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-293408 --driver=kvm2 : (49.203254052s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-291556
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-293408
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-293408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-293408
helpers_test.go:175: Cleaning up "first-291556" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-291556
--- PASS: TestMinikubeProfile (104.69s)
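
Note: profile list -ojson is what the test parses to confirm both profiles exist. A quick manual equivalent, assuming this minikube version groups profiles under valid/invalid keys in that JSON (the jq filter below is an assumption about that shape, not taken from the log):

    # List the names of all well-formed profiles.
    out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'
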

TestMountStart/serial/StartWithMountFirst (28.58s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-864947 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-864947 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (27.579186195s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.58s)

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-864947 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-864947 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
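
Note: the grep above is the whole verification: the host directory is exported over 9p and appears in the guest at /minikube-host. The mount line inside the guest should echo the flags given to start (option names here follow the Linux 9p mount syntax; the exact output may differ):

    # Expect trans=tcp with port=46464 and msize=6543 on /minikube-host.
    out/minikube-linux-amd64 -p mount-start-1-864947 ssh -- mount | grep 9p
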

TestMountStart/serial/StartWithMountSecond (30.47s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-885268 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E1101 00:00:43.098195   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
E1101 00:00:43.103486   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
E1101 00:00:43.113795   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
E1101 00:00:43.134101   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
E1101 00:00:43.174462   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
E1101 00:00:43.254853   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
E1101 00:00:43.415385   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
E1101 00:00:43.735982   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
E1101 00:00:44.376994   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
E1101 00:00:45.657501   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
E1101 00:00:48.219434   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
E1101 00:00:53.339645   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-885268 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (29.473422972s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.47s)

TestMountStart/serial/VerifyMountSecond (0.41s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-885268 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-885268 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

TestMountStart/serial/DeleteFirst (0.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-864947 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

TestMountStart/serial/VerifyMountPostDelete (0.42s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-885268 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-885268 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.42s)

TestMountStart/serial/Stop (2.09s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-885268
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-885268: (2.092022952s)
--- PASS: TestMountStart/serial/Stop (2.09s)

TestMountStart/serial/RestartStopped (24.49s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-885268
E1101 00:01:03.580356   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
E1101 00:01:19.813978   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
E1101 00:01:24.061333   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-885268: (23.485503507s)
--- PASS: TestMountStart/serial/RestartStopped (24.49s)

TestMountStart/serial/VerifyMountPostStop (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-885268 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-885268 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

TestMultiNode/serial/FreshStart2Nodes (131.88s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-391061 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E1101 00:02:05.022259   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
E1101 00:02:11.647753   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/functional-238689/client.crt: no such file or directory
E1101 00:02:39.331183   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/functional-238689/client.crt: no such file or directory
E1101 00:03:26.943040   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-391061 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m11.428225257s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (131.88s)
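
Note: --nodes=2 brings up one control plane plus one worker (multinode-391061-m02). Beyond the status call above, an equivalent cluster-level sanity check (illustrative, not part of the test):

    # The worker should register as a second Ready node in the cluster.
    kubectl --context multinode-391061 get nodes -o wide
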

TestMultiNode/serial/DeployApp2Nodes (5.15s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391061 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391061 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-391061 -- rollout status deployment/busybox: (3.393416921s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391061 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391061 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391061 -- exec busybox-5bc68d56bd-gm6t7 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391061 -- exec busybox-5bc68d56bd-kgjmh -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391061 -- exec busybox-5bc68d56bd-gm6t7 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391061 -- exec busybox-5bc68d56bd-kgjmh -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391061 -- exec busybox-5bc68d56bd-gm6t7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391061 -- exec busybox-5bc68d56bd-kgjmh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.15s)

TestMultiNode/serial/PingHostFrom2Pods (0.95s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391061 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391061 -- exec busybox-5bc68d56bd-gm6t7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391061 -- exec busybox-5bc68d56bd-gm6t7 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391061 -- exec busybox-5bc68d56bd-kgjmh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391061 -- exec busybox-5bc68d56bd-kgjmh -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.95s)
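
Note: the pipeline nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 pulls the resolved address out of busybox nslookup output (fifth line, third space-delimited field); that address, 192.168.39.1 here, is the host side of the libvirt network, so the single ping from each pod proves pod-to-host reachability. Condensed by hand (same pod names as the log):

    HOST_IP=$(kubectl --context multinode-391061 exec busybox-5bc68d56bd-gm6t7 -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context multinode-391061 exec busybox-5bc68d56bd-gm6t7 -- ping -c 1 "$HOST_IP"
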

TestMultiNode/serial/AddNode (46.88s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-391061 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-391061 -v 3 --alsologtostderr: (46.291092147s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.88s)
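
Note: node add provisions a third machine (multinode-391061-m03) and joins it as a worker. Quick follow-up checks (illustrative):

    # Machine view and Kubernetes view of the newly added node.
    out/minikube-linux-amd64 node list -p multinode-391061
    kubectl --context multinode-391061 get nodes
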

TestMultiNode/serial/ProfileList (0.22s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

TestMultiNode/serial/CopyFile (7.91s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 cp testdata/cp-test.txt multinode-391061:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 ssh -n multinode-391061 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 cp multinode-391061:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile415772365/001/cp-test_multinode-391061.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 ssh -n multinode-391061 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 cp multinode-391061:/home/docker/cp-test.txt multinode-391061-m02:/home/docker/cp-test_multinode-391061_multinode-391061-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 ssh -n multinode-391061 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 ssh -n multinode-391061-m02 "sudo cat /home/docker/cp-test_multinode-391061_multinode-391061-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 cp multinode-391061:/home/docker/cp-test.txt multinode-391061-m03:/home/docker/cp-test_multinode-391061_multinode-391061-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 ssh -n multinode-391061 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 ssh -n multinode-391061-m03 "sudo cat /home/docker/cp-test_multinode-391061_multinode-391061-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 cp testdata/cp-test.txt multinode-391061-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 ssh -n multinode-391061-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 cp multinode-391061-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile415772365/001/cp-test_multinode-391061-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 ssh -n multinode-391061-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 cp multinode-391061-m02:/home/docker/cp-test.txt multinode-391061:/home/docker/cp-test_multinode-391061-m02_multinode-391061.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 ssh -n multinode-391061-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 ssh -n multinode-391061 "sudo cat /home/docker/cp-test_multinode-391061-m02_multinode-391061.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 cp multinode-391061-m02:/home/docker/cp-test.txt multinode-391061-m03:/home/docker/cp-test_multinode-391061-m02_multinode-391061-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 ssh -n multinode-391061-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 ssh -n multinode-391061-m03 "sudo cat /home/docker/cp-test_multinode-391061-m02_multinode-391061-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 cp testdata/cp-test.txt multinode-391061-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 ssh -n multinode-391061-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 cp multinode-391061-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile415772365/001/cp-test_multinode-391061-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 ssh -n multinode-391061-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 cp multinode-391061-m03:/home/docker/cp-test.txt multinode-391061:/home/docker/cp-test_multinode-391061-m03_multinode-391061.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 ssh -n multinode-391061-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 ssh -n multinode-391061 "sudo cat /home/docker/cp-test_multinode-391061-m03_multinode-391061.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 cp multinode-391061-m03:/home/docker/cp-test.txt multinode-391061-m02:/home/docker/cp-test_multinode-391061-m03_multinode-391061-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 ssh -n multinode-391061-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 ssh -n multinode-391061-m02 "sudo cat /home/docker/cp-test_multinode-391061-m03_multinode-391061-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.91s)
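
Note: minikube cp treats a bare path as the host filesystem and <node>:<path> as a path inside that node, which is why the matrix above covers host-to-node, node-to-host, and node-to-node in every direction. Two representative forms (the destination file name in the second command is illustrative):

    out/minikube-linux-amd64 -p multinode-391061 cp testdata/cp-test.txt multinode-391061-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-391061 cp multinode-391061-m02:/home/docker/cp-test.txt /tmp/cp-test-copy.txt
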

TestMultiNode/serial/StopNode (3.34s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-391061 node stop m03: (2.425392483s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-391061 status: exit status 7 (458.25322ms)

-- stdout --
	multinode-391061
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-391061-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-391061-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-391061 status --alsologtostderr: exit status 7 (456.629695ms)

-- stdout --
	multinode-391061
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-391061-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-391061-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1101 00:04:44.879449   29115 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:04:44.879727   29115 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:04:44.879736   29115 out.go:309] Setting ErrFile to fd 2...
	I1101 00:04:44.879741   29115 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:04:44.879942   29115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7251/.minikube/bin
	I1101 00:04:44.880144   29115 out.go:303] Setting JSON to false
	I1101 00:04:44.880179   29115 mustload.go:65] Loading cluster: multinode-391061
	I1101 00:04:44.880327   29115 notify.go:220] Checking for updates...
	I1101 00:04:44.880607   29115 config.go:182] Loaded profile config "multinode-391061": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1101 00:04:44.880631   29115 status.go:255] checking status of multinode-391061 ...
	I1101 00:04:44.881102   29115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:04:44.881197   29115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:04:44.901200   29115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40743
	I1101 00:04:44.901689   29115 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:04:44.902212   29115 main.go:141] libmachine: Using API Version  1
	I1101 00:04:44.902260   29115 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:04:44.902622   29115 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:04:44.902818   29115 main.go:141] libmachine: (multinode-391061) Calling .GetState
	I1101 00:04:44.904504   29115 status.go:330] multinode-391061 host status = "Running" (err=<nil>)
	I1101 00:04:44.904518   29115 host.go:66] Checking if "multinode-391061" exists ...
	I1101 00:04:44.904825   29115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:04:44.904863   29115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:04:44.919764   29115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45835
	I1101 00:04:44.920226   29115 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:04:44.920715   29115 main.go:141] libmachine: Using API Version  1
	I1101 00:04:44.920755   29115 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:04:44.921066   29115 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:04:44.921230   29115 main.go:141] libmachine: (multinode-391061) Calling .GetIP
	I1101 00:04:44.923842   29115 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:04:44.924249   29115 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:01:43 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:04:44.924290   29115 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:04:44.924402   29115 host.go:66] Checking if "multinode-391061" exists ...
	I1101 00:04:44.924733   29115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:04:44.924774   29115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:04:44.939724   29115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37313
	I1101 00:04:44.940160   29115 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:04:44.940679   29115 main.go:141] libmachine: Using API Version  1
	I1101 00:04:44.940709   29115 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:04:44.941026   29115 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:04:44.941211   29115 main.go:141] libmachine: (multinode-391061) Calling .DriverName
	I1101 00:04:44.941402   29115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 00:04:44.941424   29115 main.go:141] libmachine: (multinode-391061) Calling .GetSSHHostname
	I1101 00:04:44.944183   29115 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:04:44.944630   29115 main.go:141] libmachine: (multinode-391061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:c2:69", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:01:43 +0000 UTC Type:0 Mac:52:54:00:b9:c2:69 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-391061 Clientid:01:52:54:00:b9:c2:69}
	I1101 00:04:44.944657   29115 main.go:141] libmachine: (multinode-391061) DBG | domain multinode-391061 has defined IP address 192.168.39.43 and MAC address 52:54:00:b9:c2:69 in network mk-multinode-391061
	I1101 00:04:44.944774   29115 main.go:141] libmachine: (multinode-391061) Calling .GetSSHPort
	I1101 00:04:44.944968   29115 main.go:141] libmachine: (multinode-391061) Calling .GetSSHKeyPath
	I1101 00:04:44.945106   29115 main.go:141] libmachine: (multinode-391061) Calling .GetSSHUsername
	I1101 00:04:44.945245   29115 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061/id_rsa Username:docker}
	I1101 00:04:45.042721   29115 ssh_runner.go:195] Run: systemctl --version
	I1101 00:04:45.047906   29115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:04:45.060794   29115 kubeconfig.go:92] found "multinode-391061" server: "https://192.168.39.43:8443"
	I1101 00:04:45.060818   29115 api_server.go:166] Checking apiserver status ...
	I1101 00:04:45.060856   29115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:04:45.074542   29115 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1895/cgroup
	I1101 00:04:45.083604   29115 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/podb1b3f1e5d8276558ad5f45ab6c7fece5/2b739c443c07e31a53c55bacfd90ea417d6c0bbdf5ee5cc544fdc4f2c8a1c993"
	I1101 00:04:45.083682   29115 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podb1b3f1e5d8276558ad5f45ab6c7fece5/2b739c443c07e31a53c55bacfd90ea417d6c0bbdf5ee5cc544fdc4f2c8a1c993/freezer.state
	I1101 00:04:45.092815   29115 api_server.go:204] freezer state: "THAWED"
	I1101 00:04:45.092852   29115 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I1101 00:04:45.097668   29115 api_server.go:279] https://192.168.39.43:8443/healthz returned 200:
	ok
	I1101 00:04:45.097694   29115 status.go:421] multinode-391061 apiserver status = Running (err=<nil>)
	I1101 00:04:45.097706   29115 status.go:257] multinode-391061 status: &{Name:multinode-391061 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 00:04:45.097727   29115 status.go:255] checking status of multinode-391061-m02 ...
	I1101 00:04:45.098028   29115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:04:45.098071   29115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:04:45.112579   29115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43745
	I1101 00:04:45.112985   29115 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:04:45.113471   29115 main.go:141] libmachine: Using API Version  1
	I1101 00:04:45.113495   29115 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:04:45.113796   29115 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:04:45.113945   29115 main.go:141] libmachine: (multinode-391061-m02) Calling .GetState
	I1101 00:04:45.115549   29115 status.go:330] multinode-391061-m02 host status = "Running" (err=<nil>)
	I1101 00:04:45.115567   29115 host.go:66] Checking if "multinode-391061-m02" exists ...
	I1101 00:04:45.115904   29115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:04:45.115941   29115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:04:45.130466   29115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37007
	I1101 00:04:45.130888   29115 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:04:45.131320   29115 main.go:141] libmachine: Using API Version  1
	I1101 00:04:45.131343   29115 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:04:45.131630   29115 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:04:45.131797   29115 main.go:141] libmachine: (multinode-391061-m02) Calling .GetIP
	I1101 00:04:45.134443   29115 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:04:45.134927   29115 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:03:04 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:04:45.134969   29115 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:04:45.135080   29115 host.go:66] Checking if "multinode-391061-m02" exists ...
	I1101 00:04:45.135377   29115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:04:45.135415   29115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:04:45.150994   29115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43415
	I1101 00:04:45.151392   29115 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:04:45.151873   29115 main.go:141] libmachine: Using API Version  1
	I1101 00:04:45.151899   29115 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:04:45.152174   29115 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:04:45.152342   29115 main.go:141] libmachine: (multinode-391061-m02) Calling .DriverName
	I1101 00:04:45.152528   29115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 00:04:45.152554   29115 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHHostname
	I1101 00:04:45.155728   29115 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:04:45.156152   29115 main.go:141] libmachine: (multinode-391061-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:1a:84", ip: ""} in network mk-multinode-391061: {Iface:virbr1 ExpiryTime:2023-11-01 01:03:04 +0000 UTC Type:0 Mac:52:54:00:f1:1a:84 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-391061-m02 Clientid:01:52:54:00:f1:1a:84}
	I1101 00:04:45.156176   29115 main.go:141] libmachine: (multinode-391061-m02) DBG | domain multinode-391061-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:f1:1a:84 in network mk-multinode-391061
	I1101 00:04:45.156370   29115 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHPort
	I1101 00:04:45.156530   29115 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHKeyPath
	I1101 00:04:45.156703   29115 main.go:141] libmachine: (multinode-391061-m02) Calling .GetSSHUsername
	I1101 00:04:45.156852   29115 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17486-7251/.minikube/machines/multinode-391061-m02/id_rsa Username:docker}
	I1101 00:04:45.245693   29115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:04:45.259120   29115 status.go:257] multinode-391061-m02 status: &{Name:multinode-391061-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1101 00:04:45.259150   29115 status.go:255] checking status of multinode-391061-m03 ...
	I1101 00:04:45.259860   29115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:04:45.259907   29115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:04:45.275416   29115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
	I1101 00:04:45.275863   29115 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:04:45.276345   29115 main.go:141] libmachine: Using API Version  1
	I1101 00:04:45.276366   29115 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:04:45.276690   29115 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:04:45.276841   29115 main.go:141] libmachine: (multinode-391061-m03) Calling .GetState
	I1101 00:04:45.278659   29115 status.go:330] multinode-391061-m03 host status = "Stopped" (err=<nil>)
	I1101 00:04:45.278676   29115 status.go:343] host is not running, skipping remaining checks
	I1101 00:04:45.278683   29115 status.go:257] multinode-391061-m03 status: &{Name:multinode-391061-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.34s)
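For reference, the df/awk probe in the StopNode log above is how the status check reads disk pressure: it prints the Use% column for the /var filesystem. A sketch of the same probe run by hand against this profile (note the escaped \$5 so awk, not the local shell, expands the field):

	out/minikube-linux-amd64 ssh -p multinode-391061-m02 "df -h /var | awk 'NR==2{print \$5}'"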

TestMultiNode/serial/StartAfterStop (31.37s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-391061 node start m03 --alsologtostderr: (30.707517984s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.37s)

TestMultiNode/serial/RestartKeepsNodes (185.66s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-391061
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-391061
E1101 00:05:43.098134   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-391061: (28.49368123s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-391061 --wait=true -v=8 --alsologtostderr
E1101 00:06:10.783794   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
E1101 00:06:19.814379   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
E1101 00:07:11.647299   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/functional-238689/client.crt: no such file or directory
E1101 00:07:42.859531   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-391061 --wait=true -v=8 --alsologtostderr: (2m37.041269102s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-391061
--- PASS: TestMultiNode/serial/RestartKeepsNodes (185.66s)

TestMultiNode/serial/DeleteNode (1.77s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-391061 node delete m03: (1.223535381s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.77s)
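The go-template in the last step prints the Ready condition of every node, one True/False per line. Unwrapped from the extra quoting the harness adds, an equivalent standalone invocation would be:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'

After the m03 delete above, two Ready lines (control plane plus m02) would be expected.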

TestMultiNode/serial/StopMultiNode (25.56s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-391061 stop: (25.370879444s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-391061 status: exit status 7 (100.543376ms)

-- stdout --
	multinode-391061
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-391061-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391061 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-391061 status --alsologtostderr: exit status 7 (92.570909ms)

-- stdout --
	multinode-391061
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-391061-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1101 00:08:49.605618   30569 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:08:49.605770   30569 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:08:49.605783   30569 out.go:309] Setting ErrFile to fd 2...
	I1101 00:08:49.605791   30569 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:08:49.605975   30569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-7251/.minikube/bin
	I1101 00:08:49.606135   30569 out.go:303] Setting JSON to false
	I1101 00:08:49.606167   30569 mustload.go:65] Loading cluster: multinode-391061
	I1101 00:08:49.606299   30569 notify.go:220] Checking for updates...
	I1101 00:08:49.606750   30569 config.go:182] Loaded profile config "multinode-391061": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1101 00:08:49.606770   30569 status.go:255] checking status of multinode-391061 ...
	I1101 00:08:49.607233   30569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:08:49.607337   30569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:08:49.621450   30569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43851
	I1101 00:08:49.621871   30569 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:08:49.622361   30569 main.go:141] libmachine: Using API Version  1
	I1101 00:08:49.622384   30569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:08:49.622718   30569 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:08:49.622885   30569 main.go:141] libmachine: (multinode-391061) Calling .GetState
	I1101 00:08:49.624234   30569 status.go:330] multinode-391061 host status = "Stopped" (err=<nil>)
	I1101 00:08:49.624248   30569 status.go:343] host is not running, skipping remaining checks
	I1101 00:08:49.624254   30569 status.go:257] multinode-391061 status: &{Name:multinode-391061 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 00:08:49.624298   30569 status.go:255] checking status of multinode-391061-m02 ...
	I1101 00:08:49.624580   30569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1101 00:08:49.624618   30569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1101 00:08:49.638461   30569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33965
	I1101 00:08:49.638895   30569 main.go:141] libmachine: () Calling .GetVersion
	I1101 00:08:49.639345   30569 main.go:141] libmachine: Using API Version  1
	I1101 00:08:49.639368   30569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1101 00:08:49.639679   30569 main.go:141] libmachine: () Calling .GetMachineName
	I1101 00:08:49.639883   30569 main.go:141] libmachine: (multinode-391061-m02) Calling .GetState
	I1101 00:08:49.641577   30569 status.go:330] multinode-391061-m02 host status = "Stopped" (err=<nil>)
	I1101 00:08:49.641599   30569 status.go:343] host is not running, skipping remaining checks
	I1101 00:08:49.641605   30569 status.go:257] multinode-391061-m02 status: &{Name:multinode-391061-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.56s)

TestMultiNode/serial/ValidateNameConflict (52.33s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-391061
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-391061-m02 --driver=kvm2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-391061-m02 --driver=kvm2 : exit status 14 (84.596971ms)

-- stdout --
	* [multinode-391061-m02] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17486
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17486-7251/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7251/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-391061-m02' is duplicated with machine name 'multinode-391061-m02' in profile 'multinode-391061'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-391061-m03 --driver=kvm2 
E1101 00:10:43.098342   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-391061-m03 --driver=kvm2 : (50.943155089s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-391061
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-391061: exit status 80 (251.69693ms)

-- stdout --
	* Adding node m03 to cluster multinode-391061
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-391061-m03 already exists in multinode-391061-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-391061-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (52.33s)

TestPreload (176.96s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-069311 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E1101 00:11:19.813886   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
E1101 00:12:11.648570   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/functional-238689/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-069311 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m29.7768813s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-069311 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-069311 image pull gcr.io/k8s-minikube/busybox: (1.385962276s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-069311
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-069311: (13.115726174s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-069311 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E1101 00:13:34.692082   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/functional-238689/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-069311 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m11.4022512s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-069311 image list
helpers_test.go:175: Cleaning up "test-preload-069311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-069311
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-069311: (1.054955813s)
--- PASS: TestPreload (176.96s)

TestScheduledStopUnix (123.3s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-313402 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-313402 --memory=2048 --driver=kvm2 : (51.523253649s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-313402 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-313402 -n scheduled-stop-313402
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-313402 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-313402 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-313402 -n scheduled-stop-313402
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-313402
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-313402 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1101 00:15:43.098668   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-313402
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-313402: exit status 7 (76.48597ms)

-- stdout --
	scheduled-stop-313402
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-313402 -n scheduled-stop-313402
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-313402 -n scheduled-stop-313402: exit status 7 (76.134706ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-313402" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-313402
--- PASS: TestScheduledStopUnix (123.30s)
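The sequence above exercises the scheduled-stop workflow end to end; condensed to the three flags the test drives (commands exactly as run above):

	out/minikube-linux-amd64 stop -p scheduled-stop-313402 --schedule 15s                 # arm a stop 15s from now
	out/minikube-linux-amd64 stop -p scheduled-stop-313402 --cancel-scheduled             # disarm a pending stop
	out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-313402     # inspect the timer

Once an armed timer fires, status exits 7 with host: Stopped, as seen above.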

TestSkaffold (141.67s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1890254131 version
skaffold_test.go:63: skaffold version: v2.8.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-440701 --memory=2600 --driver=kvm2 
E1101 00:16:19.814020   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-440701 --memory=2600 --driver=kvm2 : (53.980027632s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1890254131 run --minikube-profile skaffold-440701 --kube-context skaffold-440701 --status-check=true --port-forward=false --interactive=false
E1101 00:17:06.144804   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
E1101 00:17:11.647186   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/functional-238689/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1890254131 run --minikube-profile skaffold-440701 --kube-context skaffold-440701 --status-check=true --port-forward=false --interactive=false: (1m15.77062089s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-c85cd4896-h5db8" [0d7b1ff1-8aa3-482d-a635-4da4f2f43ffa] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.016591088s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-dcdcbb68-28rms" [9d2eaf05-b162-4018-adee-7b96a5375cee] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.009314035s
helpers_test.go:175: Cleaning up "skaffold-440701" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-440701
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-440701: (1.151044092s)
--- PASS: TestSkaffold (141.67s)

TestRunningBinaryUpgrade (184.41s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.2516820516.exe start -p running-upgrade-486790 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.2516820516.exe start -p running-upgrade-486790 --memory=2200 --vm-driver=kvm2 : (1m59.326870401s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-486790 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-486790 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m2.645633929s)
helpers_test.go:175: Cleaning up "running-upgrade-486790" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-486790
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-486790: (2.151578702s)
--- PASS: TestRunningBinaryUpgrade (184.41s)

TestKubernetesUpgrade (219.21s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-212565 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-212565 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (2m0.975741641s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-212565
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-212565: (10.445880013s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-212565 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-212565 status --format={{.Host}}: exit status 7 (96.590895ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-212565 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 
E1101 00:22:11.647822   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/functional-238689/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-212565 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 : (56.920655612s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-212565 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-212565 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-212565 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (138.045568ms)

-- stdout --
	* [kubernetes-upgrade-212565] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17486
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17486-7251/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7251/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-212565
	    minikube start -p kubernetes-upgrade-212565 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2125652 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-212565 --kubernetes-version=v1.28.3
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-212565 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 
E1101 00:23:19.764578   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/skaffold-440701/client.crt: no such file or directory
E1101 00:23:19.769880   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/skaffold-440701/client.crt: no such file or directory
E1101 00:23:19.780167   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/skaffold-440701/client.crt: no such file or directory
E1101 00:23:19.801017   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/skaffold-440701/client.crt: no such file or directory
E1101 00:23:19.841616   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/skaffold-440701/client.crt: no such file or directory
E1101 00:23:19.922429   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/skaffold-440701/client.crt: no such file or directory
E1101 00:23:20.082593   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/skaffold-440701/client.crt: no such file or directory
E1101 00:23:20.403669   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/skaffold-440701/client.crt: no such file or directory
E1101 00:23:21.044438   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/skaffold-440701/client.crt: no such file or directory
E1101 00:23:22.325483   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/skaffold-440701/client.crt: no such file or directory
E1101 00:23:24.885991   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/skaffold-440701/client.crt: no such file or directory
E1101 00:23:30.006655   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/skaffold-440701/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-212565 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 : (29.315624169s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-212565" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-212565
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-212565: (1.240365132s)
--- PASS: TestKubernetesUpgrade (219.21s)

TestStoppedBinaryUpgrade/Setup (0.38s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.38s)

TestStoppedBinaryUpgrade/Upgrade (212.85s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.2577609250.exe start -p stopped-upgrade-260652 --memory=2200 --vm-driver=kvm2 
E1101 00:21:19.814153   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.2577609250.exe start -p stopped-upgrade-260652 --memory=2200 --vm-driver=kvm2 : (2m20.704755819s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.2577609250.exe -p stopped-upgrade-260652 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.2577609250.exe -p stopped-upgrade-260652 stop: (13.084319334s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-260652 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E1101 00:24:00.728664   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/skaffold-440701/client.crt: no such file or directory
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-260652 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (59.056770026s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (212.85s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.45s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-260652
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-260652: (1.448975019s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.45s)

TestPause/serial/Start (80.03s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-710350 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-710350 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m20.031616583s)
--- PASS: TestPause/serial/Start (80.03s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-757160 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-757160 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (124.650878ms)

-- stdout --
	* [NoKubernetes-757160] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17486
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17486-7251/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-7251/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

TestNoKubernetes/serial/StartWithK8s (89.38s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-757160 --driver=kvm2 
E1101 00:25:43.098634   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
E1101 00:26:03.610152   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/skaffold-440701/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-757160 --driver=kvm2 : (1m29.036917455s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-757160 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (89.38s)

TestPause/serial/SecondStartNoReconfiguration (59.05s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-710350 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-710350 --alsologtostderr -v=1 --driver=kvm2 : (59.017117812s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (59.05s)

TestNetworkPlugins/group/auto/Start (82.17s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-925990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-925990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m22.166172766s)
--- PASS: TestNetworkPlugins/group/auto/Start (82.17s)

TestNetworkPlugins/group/kindnet/Start (106.49s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-925990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-925990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m46.49305947s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (106.49s)

TestNoKubernetes/serial/StartWithStopK8s (36.21s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-757160 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-757160 --no-kubernetes --driver=kvm2 : (34.623012421s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-757160 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-757160 status -o json: exit status 2 (321.377442ms)

-- stdout --
	{"Name":"NoKubernetes-757160","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-757160
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-757160: (1.268860457s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (36.21s)

TestPause/serial/Pause (0.83s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-710350 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.83s)

TestPause/serial/VerifyStatus (0.36s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-710350 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-710350 --output=json --layout=cluster: exit status 2 (363.066663ms)

-- stdout --
	{"Name":"pause-710350","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-710350","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)
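The cluster-layout JSON above encodes the paused state as StatusCode 418 / StatusName "Paused", and the status command itself exits 2 while paused. To pull just the state out of that JSON, something like the following works (jq is not part of the harness and is shown only as a convenience):

	out/minikube-linux-amd64 status -p pause-710350 --output=json --layout=cluster | jq -r .StatusName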

TestPause/serial/Unpause (0.71s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-710350 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.71s)

TestPause/serial/PauseAgain (0.8s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-710350 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.80s)

TestPause/serial/DeletePaused (1.22s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-710350 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-710350 --alsologtostderr -v=5: (1.21981937s)
--- PASS: TestPause/serial/DeletePaused (1.22s)

TestPause/serial/VerifyDeletedResources (0.4s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.40s)

TestNetworkPlugins/group/calico/Start (119.85s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-925990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
E1101 00:27:11.647528   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/functional-238689/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-925990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m59.852970892s)
--- PASS: TestNetworkPlugins/group/calico/Start (119.85s)

TestNoKubernetes/serial/Start (52.77s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-757160 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-757160 --no-kubernetes --driver=kvm2 : (52.768213195s)
--- PASS: TestNoKubernetes/serial/Start (52.77s)

TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-925990 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

TestNetworkPlugins/group/auto/NetCatPod (12.41s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-925990 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5lhnf" [d64ab092-748c-4b99-be15-6058584b0cee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 00:27:47.994296   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/gvisor-273376/client.crt: no such file or directory
E1101 00:27:47.999641   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/gvisor-273376/client.crt: no such file or directory
E1101 00:27:48.009996   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/gvisor-273376/client.crt: no such file or directory
E1101 00:27:48.030342   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/gvisor-273376/client.crt: no such file or directory
E1101 00:27:48.070699   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/gvisor-273376/client.crt: no such file or directory
E1101 00:27:48.151059   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/gvisor-273376/client.crt: no such file or directory
E1101 00:27:48.311488   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/gvisor-273376/client.crt: no such file or directory
E1101 00:27:48.632085   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/gvisor-273376/client.crt: no such file or directory
E1101 00:27:49.272521   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/gvisor-273376/client.crt: no such file or directory
E1101 00:27:50.553100   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/gvisor-273376/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-5lhnf" [d64ab092-748c-4b99-be15-6058584b0cee] Running
E1101 00:27:53.113715   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/gvisor-273376/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.011696919s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.41s)

TestNetworkPlugins/group/auto/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-925990 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-925990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-925990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1101 00:27:58.234362   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/gvisor-273376/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
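The three short checks above probe connectivity from inside the netcat deployment: DNS resolves kubernetes.default, Localhost dials the pod's own port directly, and HairPin dials the pod back through its own service name (traffic leaving the pod and returning to it via the service VIP). The nc probes are wrapped in /bin/sh -c by the harness; functionally they reduce to:

	kubectl --context auto-925990 exec deployment/netcat -- nslookup kubernetes.default       # DNS
	kubectl --context auto-925990 exec deployment/netcat -- nc -w 5 -i 5 -z localhost 8080    # Localhost
	kubectl --context auto-925990 exec deployment/netcat -- nc -w 5 -i 5 -z netcat 8080       # HairPin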

TestNetworkPlugins/group/custom-flannel/Start (88.45s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-925990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
E1101 00:28:19.764860   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/skaffold-440701/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-925990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m28.450244424s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (88.45s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-7t25r" [57105b14-83e9-41df-8246-f65abbf4e1bd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.024994387s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-925990 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.42s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-925990 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-z2b9z" [0c9b9af4-1cd1-4147-98fe-cfeee751e894] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 00:28:28.956243   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/gvisor-273376/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-z2b9z" [0c9b9af4-1cd1-4147-98fe-cfeee751e894] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.014522675s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.42s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-757160 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-757160 "sudo systemctl is-active --quiet service kubelet": exit status 1 (244.933742ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)
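The "Process exited with status 3" above is the expected outcome: systemctl is-active exits 0 only when the unit is active, and non-zero (conventionally 3 for inactive) otherwise, so the failing ssh is exactly what a --no-kubernetes profile should produce. Dropping --quiet makes the state visible:

	out/minikube-linux-amd64 ssh -p NoKubernetes-757160 "sudo systemctl is-active kubelet"
	# typically prints "inactive" and exits 3 on a --no-kubernetes profile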

TestNoKubernetes/serial/ProfileList (1.28s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.28s)

TestNoKubernetes/serial/Stop (2.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-757160
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-757160: (2.293017958s)
--- PASS: TestNoKubernetes/serial/Stop (2.29s)

TestNoKubernetes/serial/StartNoArgs (43.17s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-757160 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-757160 --driver=kvm2 : (43.168052524s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (43.17s)

TestNetworkPlugins/group/kindnet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-925990 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

TestNetworkPlugins/group/kindnet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-925990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

TestNetworkPlugins/group/kindnet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-925990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)
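The HairPin case is the netcat pod dialing its own Service name (netcat:8080), i.e. traffic that leaves the pod and is routed straight back to it through the Service VIP, which kubelet's hairpin mode must permit. Roughly what "nc -w 5 -z netcat 8080" does, as a Go sketch meant to run inside that pod:

// hairpin.go - sketch of the nc -z probe: a plain TCP connect with a
// 5s timeout to the pod's own Service name.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "netcat:8080", 5*time.Second)
	if err != nil {
		fmt.Println("hairpin connect failed:", err)
		return
	}
	conn.Close()
	fmt.Println("hairpin connect succeeded")
}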

TestNetworkPlugins/group/false/Start (118.51s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-925990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E1101 00:29:09.916701   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/gvisor-273376/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-925990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m58.507055136s)
--- PASS: TestNetworkPlugins/group/false/Start (118.51s)

TestNetworkPlugins/group/calico/ControllerPod (5.03s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-5ctmv" [a34f1038-6dd4-40e0-94f0-a5a8c7f20979] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.028440929s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

TestNetworkPlugins/group/calico/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-925990 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

TestNetworkPlugins/group/calico/NetCatPod (13.43s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-925990 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lkgz4" [28ff3610-1248-4f80-9d81-be819727d598] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lkgz4" [28ff3610-1248-4f80-9d81-be819727d598] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.011651815s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.43s)
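The NetCatPod sequence above (Pending with ContainersNotReady, then Running) is plain label-selector polling. A compact client-go sketch of the same wait (standard k8s.io client-go against the current kubeconfig; the 15m budget mirrors the test's, and this helper is illustrative, not minikube's):

// podwait.go - sketch: wait for "app=netcat" pods in "default" to reach
// the Running phase, polling the API server every two seconds.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(15 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app=netcat"})
		if err == nil && len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning {
			fmt.Println("netcat pod is Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for app=netcat")
}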

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-757160 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-757160 "sudo systemctl is-active --quiet service kubelet": exit status 1 (238.341641ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

TestNetworkPlugins/group/enable-default-cni/Start (104.05s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-925990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-925990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m44.053915311s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (104.05s)

TestNetworkPlugins/group/calico/DNS (0.28s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-925990 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.28s)

TestNetworkPlugins/group/calico/Localhost (0.20s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-925990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

TestNetworkPlugins/group/calico/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-925990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-925990 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (15.57s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-925990 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-plj9p" [4f06f8e5-dad2-4e10-98d7-ea4a2e2a6b98] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-plj9p" [4f06f8e5-dad2-4e10-98d7-ea4a2e2a6b98] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 15.01697185s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (15.57s)

TestNetworkPlugins/group/flannel/Start (111.80s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-925990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-925990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m51.798367794s)
--- PASS: TestNetworkPlugins/group/flannel/Start (111.80s)

TestNetworkPlugins/group/custom-flannel/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-925990 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-925990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-925990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

TestNetworkPlugins/group/bridge/Start (101.11s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-925990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
E1101 00:30:31.837308   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/gvisor-273376/client.crt: no such file or directory
E1101 00:30:43.098295   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-925990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m41.107503659s)
--- PASS: TestNetworkPlugins/group/bridge/Start (101.11s)

TestNetworkPlugins/group/false/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-925990 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.23s)

TestNetworkPlugins/group/false/NetCatPod (13.42s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-925990 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hlm7g" [fc38b351-b9f1-40ed-bedb-dac2e006dea2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hlm7g" [fc38b351-b9f1-40ed-bedb-dac2e006dea2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.012941171s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.42s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-925990 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.35s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-925990 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-n4z9n" [8356c69c-9461-4bea-9cb7-659ed55ce8b4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-n4z9n" [8356c69c-9461-4bea-9cb7-659ed55ce8b4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.016011862s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.35s)

TestNetworkPlugins/group/false/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-925990 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.19s)

TestNetworkPlugins/group/false/Localhost (0.20s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-925990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.20s)

TestNetworkPlugins/group/false/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-925990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.17s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-925990 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-925990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-925990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestNetworkPlugins/group/kubenet/Start (81.82s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-925990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-925990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m21.819928881s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (81.82s)

TestStartStop/group/old-k8s-version/serial/FirstStart (174.98s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-993392 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-993392 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (2m54.980953002s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (174.98s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-6xhnm" [5b7c2426-5b2a-4bd7-8dc6-3185720b0933] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.021746038s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-925990 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

TestNetworkPlugins/group/flannel/NetCatPod (10.44s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-925990 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8kqfv" [11f61ee3-a471-4476-9e60-b49dfd172f6b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8kqfv" [11f61ee3-a471-4476-9e60-b49dfd172f6b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.013252189s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.44s)

TestNetworkPlugins/group/flannel/DNS (0.20s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-925990 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-925990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-925990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-925990 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

TestNetworkPlugins/group/bridge/NetCatPod (14.46s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-925990 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9dmtj" [1d03bb28-2bdd-42f5-84c7-f9992d32764d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9dmtj" [1d03bb28-2bdd-42f5-84c7-f9992d32764d] Running
E1101 00:32:11.647993   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/functional-238689/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.022384065s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (14.46s)

TestNetworkPlugins/group/bridge/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-925990 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.23s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-925990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-925990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

TestStartStop/group/no-preload/serial/FirstStart (135.29s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-658664 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-658664 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3: (2m15.289894461s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (135.29s)

TestStartStop/group/embed-certs/serial/FirstStart (132.48s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-503881 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3
E1101 00:32:45.744742   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/auto-925990/client.crt: no such file or directory
E1101 00:32:45.750041   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/auto-925990/client.crt: no such file or directory
E1101 00:32:45.760363   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/auto-925990/client.crt: no such file or directory
E1101 00:32:45.780724   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/auto-925990/client.crt: no such file or directory
E1101 00:32:45.821650   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/auto-925990/client.crt: no such file or directory
E1101 00:32:45.901889   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/auto-925990/client.crt: no such file or directory
E1101 00:32:46.062076   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/auto-925990/client.crt: no such file or directory
E1101 00:32:46.383049   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/auto-925990/client.crt: no such file or directory
E1101 00:32:47.024307   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/auto-925990/client.crt: no such file or directory
E1101 00:32:47.994354   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/gvisor-273376/client.crt: no such file or directory
E1101 00:32:48.304975   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/auto-925990/client.crt: no such file or directory
E1101 00:32:50.865508   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/auto-925990/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-503881 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3: (2m12.477426017s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (132.48s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-925990 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.22s)

TestNetworkPlugins/group/kubenet/NetCatPod (12.38s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-925990 replace --force -f testdata/netcat-deployment.yaml
E1101 00:32:55.985728   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/auto-925990/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jld8n" [e4950b7b-89cd-4254-b56e-7eb1b78cf7a8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jld8n" [e4950b7b-89cd-4254-b56e-7eb1b78cf7a8] Running
E1101 00:33:06.226789   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/auto-925990/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.012398601s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.38s)

TestNetworkPlugins/group/kubenet/DNS (0.20s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-925990 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.20s)

TestNetworkPlugins/group/kubenet/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-925990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.19s)

TestNetworkPlugins/group/kubenet/HairPin (0.22s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-925990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.22s)
E1101 00:39:11.347306   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/calico-925990/client.crt: no such file or directory
E1101 00:39:18.145579   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/kubenet-925990/client.crt: no such file or directory
E1101 00:39:25.696084   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/flannel-925990/client.crt: no such file or directory
E1101 00:39:32.537041   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/old-k8s-version-993392/client.crt: no such file or directory
E1101 00:39:32.542322   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/old-k8s-version-993392/client.crt: no such file or directory
E1101 00:39:32.552642   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/old-k8s-version-993392/client.crt: no such file or directory
E1101 00:39:32.573241   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/old-k8s-version-993392/client.crt: no such file or directory
E1101 00:39:32.613593   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/old-k8s-version-993392/client.crt: no such file or directory
E1101 00:39:32.693966   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/old-k8s-version-993392/client.crt: no such file or directory
E1101 00:39:32.854382   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/old-k8s-version-993392/client.crt: no such file or directory
E1101 00:39:33.175526   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/old-k8s-version-993392/client.crt: no such file or directory
E1101 00:39:33.816058   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/old-k8s-version-993392/client.crt: no such file or directory
E1101 00:39:35.097252   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/old-k8s-version-993392/client.crt: no such file or directory
E1101 00:39:37.658146   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/old-k8s-version-993392/client.crt: no such file or directory
E1101 00:39:39.031940   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/calico-925990/client.crt: no such file or directory
E1101 00:39:42.778442   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/old-k8s-version-993392/client.crt: no such file or directory
E1101 00:39:42.811675   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/skaffold-440701/client.crt: no such file or directory
E1101 00:39:45.904898   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/bridge-925990/client.crt: no such file or directory
E1101 00:39:46.705373   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/custom-flannel-925990/client.crt: no such file or directory
E1101 00:39:53.019286   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/old-k8s-version-993392/client.crt: no such file or directory
E1101 00:40:13.499513   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/old-k8s-version-993392/client.crt: no such file or directory
E1101 00:40:14.389327   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/custom-flannel-925990/client.crt: no such file or directory

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (73.62s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-195256 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3
E1101 00:33:28.383888   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/kindnet-925990/client.crt: no such file or directory
E1101 00:33:33.504822   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/kindnet-925990/client.crt: no such file or directory
E1101 00:33:43.832738   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/kindnet-925990/client.crt: no such file or directory
E1101 00:33:46.145940   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
E1101 00:34:04.313959   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/kindnet-925990/client.crt: no such file or directory
E1101 00:34:07.668151   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/auto-925990/client.crt: no such file or directory
E1101 00:34:11.347635   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/calico-925990/client.crt: no such file or directory
E1101 00:34:11.352965   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/calico-925990/client.crt: no such file or directory
E1101 00:34:11.363316   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/calico-925990/client.crt: no such file or directory
E1101 00:34:11.383657   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/calico-925990/client.crt: no such file or directory
E1101 00:34:11.424005   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/calico-925990/client.crt: no such file or directory
E1101 00:34:11.504393   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/calico-925990/client.crt: no such file or directory
E1101 00:34:11.664875   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/calico-925990/client.crt: no such file or directory
E1101 00:34:11.985551   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/calico-925990/client.crt: no such file or directory
E1101 00:34:12.626540   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/calico-925990/client.crt: no such file or directory
E1101 00:34:13.906889   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/calico-925990/client.crt: no such file or directory
E1101 00:34:16.467213   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/calico-925990/client.crt: no such file or directory
E1101 00:34:21.588278   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/calico-925990/client.crt: no such file or directory
E1101 00:34:31.828581   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/calico-925990/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-195256 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3: (1m13.621204903s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (73.62s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.56s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-993392 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2a145c06-0f3c-49a5-826d-94480900b4af] Pending
helpers_test.go:344: "busybox" [2a145c06-0f3c-49a5-826d-94480900b4af] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2a145c06-0f3c-49a5-826d-94480900b4af] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.040727383s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-993392 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.56s)
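The trailing "ulimit -n" exec above is a sanity check on the open-file limit inside the busybox pod. The same value can be read in-process; a Go sketch (run inside the container) using the standard syscall wrapper for getrlimit(2):

// fdlimit.go - sketch: read the soft/hard open-file limits that
// `ulimit -n` reports (Linux only).
package main

import (
	"fmt"
	"syscall"
)

func main() {
	var lim syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
		panic(err)
	}
	fmt.Printf("open files: soft=%d hard=%d\n", lim.Cur, lim.Max)
}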

TestStartStop/group/no-preload/serial/DeployApp (10.55s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-658664 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ee6accc2-04ef-4880-8145-22873fb0cb19] Pending
helpers_test.go:344: "busybox" [ee6accc2-04ef-4880-8145-22873fb0cb19] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ee6accc2-04ef-4880-8145-22873fb0cb19] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.035970763s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-658664 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.55s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.49s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-195256 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d196709d-5e15-4d8f-a6b2-415562216f77] Pending
helpers_test.go:344: "busybox" [d196709d-5e15-4d8f-a6b2-415562216f77] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d196709d-5e15-4d8f-a6b2-415562216f77] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.02706041s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-195256 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.49s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.91s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-993392 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-993392 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/old-k8s-version/serial/Stop (13.14s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-993392 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-993392 --alsologtostderr -v=3: (13.144665102s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.14s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-658664 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-658664 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.057171285s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-658664 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/no-preload/serial/Stop (13.13s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-658664 --alsologtostderr -v=3
E1101 00:34:45.274592   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/kindnet-925990/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-658664 --alsologtostderr -v=3: (13.126743326s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.13s)

TestStartStop/group/embed-certs/serial/DeployApp (8.42s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-503881 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e103fe81-baae-4a57-aa14-58e6c7193955] Pending
helpers_test.go:344: "busybox" [e103fe81-baae-4a57-aa14-58e6c7193955] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1101 00:34:46.705579   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/custom-flannel-925990/client.crt: no such file or directory
E1101 00:34:46.710870   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/custom-flannel-925990/client.crt: no such file or directory
E1101 00:34:46.721101   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/custom-flannel-925990/client.crt: no such file or directory
E1101 00:34:46.741368   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/custom-flannel-925990/client.crt: no such file or directory
E1101 00:34:46.781687   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/custom-flannel-925990/client.crt: no such file or directory
E1101 00:34:46.861880   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/custom-flannel-925990/client.crt: no such file or directory
E1101 00:34:47.022611   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/custom-flannel-925990/client.crt: no such file or directory
E1101 00:34:47.342886   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/custom-flannel-925990/client.crt: no such file or directory
E1101 00:34:47.983641   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/custom-flannel-925990/client.crt: no such file or directory
E1101 00:34:49.264741   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/custom-flannel-925990/client.crt: no such file or directory
helpers_test.go:344: "busybox" [e103fe81-baae-4a57-aa14-58e6c7193955] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.030518919s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-503881 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.42s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-195256 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-195256 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.066502981s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-195256 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-195256 --alsologtostderr -v=3
E1101 00:34:51.825479   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/custom-flannel-925990/client.crt: no such file or directory
E1101 00:34:52.309646   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/calico-925990/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-195256 --alsologtostderr -v=3: (13.131507406s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.13s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-503881 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-503881 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.060234146s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-503881 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-993392 -n old-k8s-version-993392
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-993392 -n old-k8s-version-993392: exit status 7 (76.374348ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-993392 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (94.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-993392 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-993392 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (1m34.135258813s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-993392 -n old-k8s-version-993392
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (94.43s)

TestStartStop/group/embed-certs/serial/Stop (13.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-503881 --alsologtostderr -v=3
E1101 00:34:56.946192   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/custom-flannel-925990/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-503881 --alsologtostderr -v=3: (13.133460618s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.13s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-658664 -n no-preload-658664
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-658664 -n no-preload-658664: exit status 7 (83.853ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-658664 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (325.41s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-658664 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-658664 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3: (5m25.128974342s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-658664 -n no-preload-658664
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (325.41s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-195256 -n default-k8s-diff-port-195256
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-195256 -n default-k8s-diff-port-195256: exit status 7 (98.337068ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-195256 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (338.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-195256 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3
E1101 00:35:07.186733   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/custom-flannel-925990/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-195256 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3: (5m38.222918487s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-195256 -n default-k8s-diff-port-195256
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (338.52s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-503881 -n embed-certs-503881
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-503881 -n embed-certs-503881: exit status 7 (81.958235ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-503881 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (383.68s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-503881 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3
E1101 00:35:27.666967   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/custom-flannel-925990/client.crt: no such file or directory
E1101 00:35:29.588518   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/auto-925990/client.crt: no such file or directory
E1101 00:35:33.270332   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/calico-925990/client.crt: no such file or directory
E1101 00:35:43.097740   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
E1101 00:36:00.760537   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/false-925990/client.crt: no such file or directory
E1101 00:36:00.765872   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/false-925990/client.crt: no such file or directory
E1101 00:36:00.776183   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/false-925990/client.crt: no such file or directory
E1101 00:36:00.796506   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/false-925990/client.crt: no such file or directory
E1101 00:36:00.836857   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/false-925990/client.crt: no such file or directory
E1101 00:36:00.917320   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/false-925990/client.crt: no such file or directory
E1101 00:36:01.077760   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/false-925990/client.crt: no such file or directory
E1101 00:36:01.398350   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/false-925990/client.crt: no such file or directory
E1101 00:36:02.039297   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/false-925990/client.crt: no such file or directory
E1101 00:36:03.319512   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/false-925990/client.crt: no such file or directory
E1101 00:36:03.471051   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/enable-default-cni-925990/client.crt: no such file or directory
E1101 00:36:03.476377   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/enable-default-cni-925990/client.crt: no such file or directory
E1101 00:36:03.486703   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/enable-default-cni-925990/client.crt: no such file or directory
E1101 00:36:03.507041   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/enable-default-cni-925990/client.crt: no such file or directory
E1101 00:36:03.547361   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/enable-default-cni-925990/client.crt: no such file or directory
E1101 00:36:03.627560   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/enable-default-cni-925990/client.crt: no such file or directory
E1101 00:36:03.788109   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/enable-default-cni-925990/client.crt: no such file or directory
E1101 00:36:04.109047   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/enable-default-cni-925990/client.crt: no such file or directory
E1101 00:36:04.750238   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/enable-default-cni-925990/client.crt: no such file or directory
E1101 00:36:05.879931   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/false-925990/client.crt: no such file or directory
E1101 00:36:06.031257   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/enable-default-cni-925990/client.crt: no such file or directory
E1101 00:36:07.195758   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/kindnet-925990/client.crt: no such file or directory
E1101 00:36:08.592017   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/enable-default-cni-925990/client.crt: no such file or directory
E1101 00:36:08.627885   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/custom-flannel-925990/client.crt: no such file or directory
E1101 00:36:11.000673   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/false-925990/client.crt: no such file or directory
E1101 00:36:13.712388   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/enable-default-cni-925990/client.crt: no such file or directory
E1101 00:36:19.814321   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/addons-424039/client.crt: no such file or directory
E1101 00:36:21.241388   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/false-925990/client.crt: no such file or directory
E1101 00:36:23.952773   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/enable-default-cni-925990/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-503881 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3: (6m23.416425753s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-503881 -n embed-certs-503881
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (383.68s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-98lbc" [6992a869-7788-49b9-8fc4-9302292db59f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.019176847s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-98lbc" [6992a869-7788-49b9-8fc4-9302292db59f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012390328s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-993392 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/Pause (2.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-993392 --alsologtostderr -v=1
E1101 00:36:43.131684   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/flannel-925990/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-993392 -n old-k8s-version-993392
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-993392 -n old-k8s-version-993392: exit status 2 (304.151191ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-993392 -n old-k8s-version-993392
E1101 00:36:44.412479   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/flannel-925990/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-993392 -n old-k8s-version-993392: exit status 2 (305.189902ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-993392 --alsologtostderr -v=1
E1101 00:36:44.433174   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/enable-default-cni-925990/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-993392 -n old-k8s-version-993392
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-993392 -n old-k8s-version-993392
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.99s)

TestStartStop/group/newest-cni/serial/FirstStart (71.61s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-983699 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3
E1101 00:36:52.093882   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/flannel-925990/client.crt: no such file or directory
E1101 00:36:55.191264   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/calico-925990/client.crt: no such file or directory
E1101 00:37:02.061640   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/bridge-925990/client.crt: no such file or directory
E1101 00:37:02.066983   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/bridge-925990/client.crt: no such file or directory
E1101 00:37:02.077353   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/bridge-925990/client.crt: no such file or directory
E1101 00:37:02.097691   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/bridge-925990/client.crt: no such file or directory
E1101 00:37:02.138025   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/bridge-925990/client.crt: no such file or directory
E1101 00:37:02.218402   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/bridge-925990/client.crt: no such file or directory
E1101 00:37:02.334723   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/flannel-925990/client.crt: no such file or directory
E1101 00:37:02.378924   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/bridge-925990/client.crt: no such file or directory
E1101 00:37:02.699544   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/bridge-925990/client.crt: no such file or directory
E1101 00:37:03.340727   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/bridge-925990/client.crt: no such file or directory
E1101 00:37:04.620945   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/bridge-925990/client.crt: no such file or directory
E1101 00:37:07.181591   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/bridge-925990/client.crt: no such file or directory
E1101 00:37:11.647196   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/functional-238689/client.crt: no such file or directory
E1101 00:37:12.302095   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/bridge-925990/client.crt: no such file or directory
E1101 00:37:22.543103   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/bridge-925990/client.crt: no such file or directory
E1101 00:37:22.682025   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/false-925990/client.crt: no such file or directory
E1101 00:37:22.815603   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/flannel-925990/client.crt: no such file or directory
E1101 00:37:25.393906   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/enable-default-cni-925990/client.crt: no such file or directory
E1101 00:37:30.548947   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/custom-flannel-925990/client.crt: no such file or directory
E1101 00:37:43.023732   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/bridge-925990/client.crt: no such file or directory
E1101 00:37:45.744282   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/auto-925990/client.crt: no such file or directory
E1101 00:37:47.994145   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/gvisor-273376/client.crt: no such file or directory
E1101 00:37:56.222157   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/kubenet-925990/client.crt: no such file or directory
E1101 00:37:56.227477   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/kubenet-925990/client.crt: no such file or directory
E1101 00:37:56.237815   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/kubenet-925990/client.crt: no such file or directory
E1101 00:37:56.258150   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/kubenet-925990/client.crt: no such file or directory
E1101 00:37:56.298459   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/kubenet-925990/client.crt: no such file or directory
E1101 00:37:56.378850   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/kubenet-925990/client.crt: no such file or directory
E1101 00:37:56.539451   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/kubenet-925990/client.crt: no such file or directory
E1101 00:37:56.860017   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/kubenet-925990/client.crt: no such file or directory
E1101 00:37:57.500964   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/kubenet-925990/client.crt: no such file or directory
E1101 00:37:58.781752   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/kubenet-925990/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-983699 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3: (1m11.608772094s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (71.61s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-983699 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-983699 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.047012377s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/newest-cni/serial/Stop (13.14s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-983699 --alsologtostderr -v=3
E1101 00:38:01.342454   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/kubenet-925990/client.crt: no such file or directory
E1101 00:38:03.775753   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/flannel-925990/client.crt: no such file or directory
E1101 00:38:06.463653   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/kubenet-925990/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-983699 --alsologtostderr -v=3: (13.137214783s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.14s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-983699 -n newest-cni-983699
E1101 00:38:13.429356   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/auto-925990/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-983699 -n newest-cni-983699: exit status 7 (85.643994ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-983699 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (47.13s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-983699 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3
E1101 00:38:16.704294   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/kubenet-925990/client.crt: no such file or directory
E1101 00:38:19.764690   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/skaffold-440701/client.crt: no such file or directory
E1101 00:38:23.257015   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/kindnet-925990/client.crt: no such file or directory
E1101 00:38:23.984221   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/bridge-925990/client.crt: no such file or directory
E1101 00:38:37.185367   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/kubenet-925990/client.crt: no such file or directory
E1101 00:38:44.603163   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/false-925990/client.crt: no such file or directory
E1101 00:38:47.315168   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/enable-default-cni-925990/client.crt: no such file or directory
E1101 00:38:51.036955   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/kindnet-925990/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-983699 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3: (46.84413507s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-983699 -n newest-cni-983699
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (47.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-983699 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/Pause (2.69s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-983699 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-983699 -n newest-cni-983699
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-983699 -n newest-cni-983699: exit status 2 (273.972223ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-983699 -n newest-cni-983699
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-983699 -n newest-cni-983699: exit status 2 (273.181196ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-983699 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-983699 -n newest-cni-983699
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-983699 -n newest-cni-983699
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.69s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2wcv6" [5d2de3b0-3b4d-424e-94f3-92b03800ae6f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01823089s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2wcv6" [5d2de3b0-3b4d-424e-94f3-92b03800ae6f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011916203s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-658664 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-658664 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/no-preload/serial/Pause (2.72s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-658664 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-658664 -n no-preload-658664
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-658664 -n no-preload-658664: exit status 2 (259.908901ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-658664 -n no-preload-658664
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-658664 -n no-preload-658664: exit status 2 (261.919164ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-658664 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-658664 -n no-preload-658664
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-658664 -n no-preload-658664
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.72s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6tjdm" [8f3b7c29-9553-4226-baa7-83740074a118] Running
E1101 00:40:43.097678   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/ingress-addon-legacy-779845/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.025890596s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6tjdm" [8f3b7c29-9553-4226-baa7-83740074a118] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013119656s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-195256 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-195256 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-195256 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-195256 -n default-k8s-diff-port-195256
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-195256 -n default-k8s-diff-port-195256: exit status 2 (273.672314ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-195256 -n default-k8s-diff-port-195256
E1101 00:40:54.459998   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/old-k8s-version-993392/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-195256 -n default-k8s-diff-port-195256: exit status 2 (270.114656ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-195256 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-195256 -n default-k8s-diff-port-195256
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-195256 -n default-k8s-diff-port-195256
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.61s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7vqsx" [e40c5a24-2434-48c6-92c2-a5274d4208b2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.017489545s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7vqsx" [e40c5a24-2434-48c6-92c2-a5274d4208b2] Running
E1101 00:41:41.849759   14463 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-7251/.minikube/profiles/flannel-925990/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012056036s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-503881 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-503881 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Pause (2.53s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-503881 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-503881 -n embed-certs-503881
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-503881 -n embed-certs-503881: exit status 2 (259.734473ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-503881 -n embed-certs-503881
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-503881 -n embed-certs-503881: exit status 2 (258.480719ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-503881 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-503881 -n embed-certs-503881
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-503881 -n embed-certs-503881
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.53s)

Test skip (31/321)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

TestDownloadOnly/v1.28.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

TestDownloadOnly/v1.28.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)
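
Note: both HyperKit tests skip with "Skip if not darwin." because HyperKit is a macOS-only hypervisor, so the suite gates on the OS at runtime. The usual Go idiom for this kind of guard (a sketch, not the suite's exact code):

package example

import (
	"runtime"
	"testing"
)

func skipIfNotDarwin(t *testing.T) {
	// HyperKit only exists on macOS; there is nothing to test elsewhere.
	if runtime.GOOS != "darwin" {
		t.Skip("Skip if not darwin.")
	}
}

func TestHyperKitExample(t *testing.T) {
	skipIfNotDarwin(t) // skips on this linux/amd64 worker
}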

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
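
Note: all eight TunnelCmd subtests skip for the same reason: minikube tunnel modifies the routing table, and on this worker running route would prompt for a password, which a non-interactive CI job cannot answer. A sketch of such a pre-flight check using sudo -n, which fails rather than prompting (hypothetical helper, not the suite's exact code):

package example

import (
	"fmt"
	"os/exec"
)

// canSudoRoute reports whether `sudo route` would run without prompting.
// The -n flag makes sudo exit non-zero instead of asking for a password.
func canSudoRoute() bool {
	return exec.Command("sudo", "-n", "route").Run() == nil
}

func main() {
	if !canSudoRoute() {
		fmt.Println("password required to execute 'route', skipping testTunnel")
		return
	}
	fmt.Println("route is available without a password; tunnel tests can run")
}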

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

x
+
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

x
+
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

x
+
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

x
+
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

x
+
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

x
+
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

x
+
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

x
+
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

x
+
TestNetworkPlugins/group/cilium (3.67s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523:
----------------------- debugLogs start: cilium-925990 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-925990

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-925990

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-925990

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-925990

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-925990

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-925990

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-925990

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-925990

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-925990

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-925990

>>> host: /etc/nsswitch.conf:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: /etc/hosts:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: /etc/resolv.conf:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-925990

>>> host: crictl pods:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: crictl containers:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> k8s: describe netcat deployment:
error: context "cilium-925990" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-925990" does not exist

>>> k8s: netcat logs:
error: context "cilium-925990" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-925990" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-925990" does not exist

>>> k8s: coredns logs:
error: context "cilium-925990" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-925990" does not exist

>>> k8s: api server logs:
error: context "cilium-925990" does not exist

>>> host: /etc/cni:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: ip a s:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: ip r s:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: iptables-save:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: iptables table nat:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-925990

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-925990

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-925990" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-925990" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-925990

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-925990

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-925990" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-925990" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-925990" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-925990" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-925990" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: kubelet daemon config:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> k8s: kubelet logs:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-925990

>>> host: docker daemon status:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: docker daemon config:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: docker system info:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: cri-docker daemon status:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: cri-docker daemon config:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: cri-dockerd version:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: containerd daemon status:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: containerd daemon config:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: containerd config dump:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: crio daemon status:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: crio daemon config:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: /etc/crio:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

>>> host: crio config:
* Profile "cilium-925990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-925990"

----------------------- debugLogs end: cilium-925990 [took: 3.522751291s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-925990" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-925990
--- SKIP: TestNetworkPlugins/group/cilium (3.67s)
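
Note: every probe in the debugLogs dump above failed the same way because the cilium-925990 profile was never started: there is no kubeconfig context and no VM to query, so kubectl reports a missing context and minikube reports a missing profile. A pre-flight check could surface that once instead of sixty-odd times; a sketch (hypothetical helper, assuming kubectl is on PATH):

package example

import (
	"fmt"
	"os/exec"
	"strings"
)

// contextExists reports whether kubectl's kubeconfig contains the named
// context. `kubectl config get-contexts -o name` prints one name per line.
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("cilium-925990")
	if err != nil {
		fmt.Println("kubectl unavailable:", err)
		return
	}
	fmt.Println("context exists:", ok) // false here, hence every probe above failed
}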

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-256146" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-256146
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)